Test Report: Docker_Linux_crio 21968

c47dc458d63a230593369798adacaa3ab200078c:2025-11-23:42467

Failed tests (38/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.25
35 TestAddons/parallel/Registry 15.61
36 TestAddons/parallel/RegistryCreds 0.43
37 TestAddons/parallel/Ingress 147.91
38 TestAddons/parallel/InspektorGadget 5.28
39 TestAddons/parallel/MetricsServer 5.32
41 TestAddons/parallel/CSI 63.9
42 TestAddons/parallel/Headlamp 2.74
43 TestAddons/parallel/CloudSpanner 6.25
44 TestAddons/parallel/LocalPath 10.19
45 TestAddons/parallel/NvidiaDevicePlugin 5.27
46 TestAddons/parallel/Yakd 5.25
47 TestAddons/parallel/AmdGpuDevicePlugin 5.25
97 TestFunctional/parallel/ServiceCmdConnect 602.94
114 TestFunctional/parallel/ServiceCmd/DeployApp 600.65
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.38
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.12
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.79
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.56
153 TestFunctional/parallel/ServiceCmd/Format 0.56
154 TestFunctional/parallel/ServiceCmd/URL 0.54
191 TestJSONOutput/pause/Command 2.32
197 TestJSONOutput/unpause/Command 1.36
248 TestPreload 439.01
261 TestPause/serial/Pause 7.17
344 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.42
347 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.24
352 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.24
368 TestStartStop/group/old-k8s-version/serial/Pause 5.88
370 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.37
374 TestStartStop/group/no-preload/serial/Pause 7.59
378 TestStartStop/group/embed-certs/serial/Pause 8.35
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.99
389 TestStartStop/group/newest-cni/serial/Pause 5.7
393 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.93
TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768607 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-768607 addons disable volcano --alsologtostderr -v=1: exit status 11 (252.123277ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 09:24:53.524034   77330 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:24:53.524170   77330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:24:53.524182   77330 out.go:374] Setting ErrFile to fd 2...
	I1123 09:24:53.524187   77330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:24:53.524397   77330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:24:53.524694   77330 mustload.go:66] Loading cluster: addons-768607
	I1123 09:24:53.525032   77330 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:24:53.525053   77330 addons.go:622] checking whether the cluster is paused
	I1123 09:24:53.525166   77330 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:24:53.525183   77330 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:24:53.525614   77330 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:24:53.544452   77330 ssh_runner.go:195] Run: systemctl --version
	I1123 09:24:53.544515   77330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:24:53.561443   77330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:24:53.660431   77330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:24:53.660515   77330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:24:53.688786   77330 cri.go:89] found id: "25a90399c18236ad6f1bd9852bf514abf0cfdc53c80ac7131707ae0c129914ea"
	I1123 09:24:53.688807   77330 cri.go:89] found id: "30475367013dc133bcf31a113a5c805e13a7ae522a2da8b1822d775743fa921d"
	I1123 09:24:53.688811   77330 cri.go:89] found id: "c692f2c4458f012e6dea37a4c5038473c8cfd52404290a9de36ee5f9dd461c33"
	I1123 09:24:53.688815   77330 cri.go:89] found id: "231168bdacbd0c44ccde524c4583dfde2507563b41dee16504f6b24aef69a685"
	I1123 09:24:53.688817   77330 cri.go:89] found id: "1364a68c663de4ec03d6c3f263b8fa435f60ed6f587a1d8c69c68574c3028a16"
	I1123 09:24:53.688823   77330 cri.go:89] found id: "57e021fa16b348dacd32b5db60bcf18618a4bd2723bead3fec97bd84820ae20d"
	I1123 09:24:53.688826   77330 cri.go:89] found id: "3be653d3906b767500336b15d61cf5636f7a5b7c372f4f4239c08bad906d64bb"
	I1123 09:24:53.688829   77330 cri.go:89] found id: "ae93f08af7cde91292f142bed64324d42e3c1e7deb6434ab827d5ecf8065d37c"
	I1123 09:24:53.688831   77330 cri.go:89] found id: "58c6caa5d7a2a89fda27f06ce40a04b27480a4b2e04bb5411861ff89abe5e146"
	I1123 09:24:53.688838   77330 cri.go:89] found id: "021ee69331dd21bf229ecd1db5d55d798fd0eee37dc0bb0b9a624c5cbccc770f"
	I1123 09:24:53.688841   77330 cri.go:89] found id: "59c5e7c66e3835f4f14bd4a82f661c738488bc6c624cc2fd13eabad0519797c8"
	I1123 09:24:53.688844   77330 cri.go:89] found id: "f4fec8768321222a9f9bf178328a43695dc72a29313975ce785004a208ca5af3"
	I1123 09:24:53.688846   77330 cri.go:89] found id: "00cf685e4f7633fdc7ff68303c67f4f43add29e8d9ccd87d21a6a088f2fdbc68"
	I1123 09:24:53.688850   77330 cri.go:89] found id: "6e05171fad5d43e018ce9c94cfb7891e9984df090d0b8adddc8122e2efd84ff6"
	I1123 09:24:53.688860   77330 cri.go:89] found id: "da035bc9e46eb83341d5eb40ca2fa703f3cea336a6a912ef4a80eeaf0a0ac076"
	I1123 09:24:53.688870   77330 cri.go:89] found id: "c21acab334cad461bd90789dbd1cf7e4a162446d76ac18a241cd3b8f9863be14"
	I1123 09:24:53.688875   77330 cri.go:89] found id: "8f3fdc51b52f6779513f36acefb86bcc8943baf18483f08bf8cce60927bd9cd4"
	I1123 09:24:53.688879   77330 cri.go:89] found id: "01d6b9bf1de88e27a372ace627c4c029fd51c26dd3f9e477e70137ecab416c36"
	I1123 09:24:53.688882   77330 cri.go:89] found id: "403102191b13c2eef45478f5af6a1ed72ff7fbdea27c8bebc65ffccf6197a3be"
	I1123 09:24:53.688884   77330 cri.go:89] found id: "d98e916f227153ff84dad39f7895deed814fbbef0272aa14546e6a49f6c7226d"
	I1123 09:24:53.688889   77330 cri.go:89] found id: "628b56a1e0e47a0532ea5375471e5d17f64b1bece8bd8004b4ed449cf90764a3"
	I1123 09:24:53.688892   77330 cri.go:89] found id: "93dfa5558a7a808c5c354787ab8eec238559016b46eb3e6825f32eb25403e092"
	I1123 09:24:53.688894   77330 cri.go:89] found id: "b5f64ab3094a653f0bd8f634e5e2cc5066d0b571ace3c66c888b4190eadc2d99"
	I1123 09:24:53.688897   77330 cri.go:89] found id: "d8909c0c21553cdb1824a36e8e2357948596cd908eaa63008f1925c3a97b4f14"
	I1123 09:24:53.688900   77330 cri.go:89] found id: ""
	I1123 09:24:53.688950   77330 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:24:53.702854   77330 out.go:203] 
	W1123 09:24:53.704001   77330 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:24:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:24:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:24:53.704024   77330 out.go:285] * 
	* 
	W1123 09:24:53.708050   77330 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:24:53.709407   77330 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-768607 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)

TestAddons/parallel/Registry (15.61s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.460843ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-wb6sr" [de7eaafd-154b-4e12-962d-23d47c7127a4] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004236624s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-hvxjj" [abbb6984-3768-48ff-8d09-b43d2af51c4f] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003795863s
addons_test.go:392: (dbg) Run:  kubectl --context addons-768607 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-768607 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-768607 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.115715751s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-768607 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768607 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-768607 addons disable registry --alsologtostderr -v=1: exit status 11 (264.575254ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 09:25:17.926661   80273 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:25:17.926938   80273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:17.926963   80273 out.go:374] Setting ErrFile to fd 2...
	I1123 09:25:17.926971   80273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:17.927276   80273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:25:17.927672   80273 mustload.go:66] Loading cluster: addons-768607
	I1123 09:25:17.928276   80273 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:17.928307   80273 addons.go:622] checking whether the cluster is paused
	I1123 09:25:17.928468   80273 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:17.928489   80273 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:25:17.928908   80273 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:25:17.948043   80273 ssh_runner.go:195] Run: systemctl --version
	I1123 09:25:17.948221   80273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:25:17.971428   80273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:25:18.074049   80273 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:25:18.074188   80273 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:25:18.104837   80273 cri.go:89] found id: "25a90399c18236ad6f1bd9852bf514abf0cfdc53c80ac7131707ae0c129914ea"
	I1123 09:25:18.104866   80273 cri.go:89] found id: "30475367013dc133bcf31a113a5c805e13a7ae522a2da8b1822d775743fa921d"
	I1123 09:25:18.104873   80273 cri.go:89] found id: "c692f2c4458f012e6dea37a4c5038473c8cfd52404290a9de36ee5f9dd461c33"
	I1123 09:25:18.104879   80273 cri.go:89] found id: "231168bdacbd0c44ccde524c4583dfde2507563b41dee16504f6b24aef69a685"
	I1123 09:25:18.104884   80273 cri.go:89] found id: "1364a68c663de4ec03d6c3f263b8fa435f60ed6f587a1d8c69c68574c3028a16"
	I1123 09:25:18.104890   80273 cri.go:89] found id: "57e021fa16b348dacd32b5db60bcf18618a4bd2723bead3fec97bd84820ae20d"
	I1123 09:25:18.104894   80273 cri.go:89] found id: "3be653d3906b767500336b15d61cf5636f7a5b7c372f4f4239c08bad906d64bb"
	I1123 09:25:18.104899   80273 cri.go:89] found id: "ae93f08af7cde91292f142bed64324d42e3c1e7deb6434ab827d5ecf8065d37c"
	I1123 09:25:18.104904   80273 cri.go:89] found id: "58c6caa5d7a2a89fda27f06ce40a04b27480a4b2e04bb5411861ff89abe5e146"
	I1123 09:25:18.104912   80273 cri.go:89] found id: "021ee69331dd21bf229ecd1db5d55d798fd0eee37dc0bb0b9a624c5cbccc770f"
	I1123 09:25:18.104916   80273 cri.go:89] found id: "59c5e7c66e3835f4f14bd4a82f661c738488bc6c624cc2fd13eabad0519797c8"
	I1123 09:25:18.104919   80273 cri.go:89] found id: "f4fec8768321222a9f9bf178328a43695dc72a29313975ce785004a208ca5af3"
	I1123 09:25:18.104928   80273 cri.go:89] found id: "00cf685e4f7633fdc7ff68303c67f4f43add29e8d9ccd87d21a6a088f2fdbc68"
	I1123 09:25:18.104933   80273 cri.go:89] found id: "6e05171fad5d43e018ce9c94cfb7891e9984df090d0b8adddc8122e2efd84ff6"
	I1123 09:25:18.104953   80273 cri.go:89] found id: "da035bc9e46eb83341d5eb40ca2fa703f3cea336a6a912ef4a80eeaf0a0ac076"
	I1123 09:25:18.104968   80273 cri.go:89] found id: "c21acab334cad461bd90789dbd1cf7e4a162446d76ac18a241cd3b8f9863be14"
	I1123 09:25:18.104976   80273 cri.go:89] found id: "8f3fdc51b52f6779513f36acefb86bcc8943baf18483f08bf8cce60927bd9cd4"
	I1123 09:25:18.104982   80273 cri.go:89] found id: "01d6b9bf1de88e27a372ace627c4c029fd51c26dd3f9e477e70137ecab416c36"
	I1123 09:25:18.104986   80273 cri.go:89] found id: "403102191b13c2eef45478f5af6a1ed72ff7fbdea27c8bebc65ffccf6197a3be"
	I1123 09:25:18.104990   80273 cri.go:89] found id: "d98e916f227153ff84dad39f7895deed814fbbef0272aa14546e6a49f6c7226d"
	I1123 09:25:18.104997   80273 cri.go:89] found id: "628b56a1e0e47a0532ea5375471e5d17f64b1bece8bd8004b4ed449cf90764a3"
	I1123 09:25:18.105001   80273 cri.go:89] found id: "93dfa5558a7a808c5c354787ab8eec238559016b46eb3e6825f32eb25403e092"
	I1123 09:25:18.105008   80273 cri.go:89] found id: "b5f64ab3094a653f0bd8f634e5e2cc5066d0b571ace3c66c888b4190eadc2d99"
	I1123 09:25:18.105012   80273 cri.go:89] found id: "d8909c0c21553cdb1824a36e8e2357948596cd908eaa63008f1925c3a97b4f14"
	I1123 09:25:18.105016   80273 cri.go:89] found id: ""
	I1123 09:25:18.105069   80273 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:25:18.119821   80273 out.go:203] 
	W1123 09:25:18.121331   80273 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:25:18.121349   80273 out.go:285] * 
	* 
	W1123 09:25:18.125405   80273 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:25:18.126803   80273 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-768607 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.61s)

TestAddons/parallel/RegistryCreds (0.43s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.146919ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-768607
addons_test.go:332: (dbg) Run:  kubectl --context addons-768607 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768607 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-768607 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (258.000416ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 09:25:18.361537   80392 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:25:18.361831   80392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:18.361842   80392 out.go:374] Setting ErrFile to fd 2...
	I1123 09:25:18.361847   80392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:18.362023   80392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:25:18.362317   80392 mustload.go:66] Loading cluster: addons-768607
	I1123 09:25:18.362644   80392 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:18.362661   80392 addons.go:622] checking whether the cluster is paused
	I1123 09:25:18.362767   80392 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:18.362793   80392 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:25:18.363212   80392 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:25:18.381498   80392 ssh_runner.go:195] Run: systemctl --version
	I1123 09:25:18.381560   80392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:25:18.399431   80392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:25:18.500757   80392 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:25:18.500843   80392 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:25:18.531291   80392 cri.go:89] found id: "25a90399c18236ad6f1bd9852bf514abf0cfdc53c80ac7131707ae0c129914ea"
	I1123 09:25:18.531318   80392 cri.go:89] found id: "30475367013dc133bcf31a113a5c805e13a7ae522a2da8b1822d775743fa921d"
	I1123 09:25:18.531322   80392 cri.go:89] found id: "c692f2c4458f012e6dea37a4c5038473c8cfd52404290a9de36ee5f9dd461c33"
	I1123 09:25:18.531326   80392 cri.go:89] found id: "231168bdacbd0c44ccde524c4583dfde2507563b41dee16504f6b24aef69a685"
	I1123 09:25:18.531329   80392 cri.go:89] found id: "1364a68c663de4ec03d6c3f263b8fa435f60ed6f587a1d8c69c68574c3028a16"
	I1123 09:25:18.531333   80392 cri.go:89] found id: "57e021fa16b348dacd32b5db60bcf18618a4bd2723bead3fec97bd84820ae20d"
	I1123 09:25:18.531336   80392 cri.go:89] found id: "3be653d3906b767500336b15d61cf5636f7a5b7c372f4f4239c08bad906d64bb"
	I1123 09:25:18.531339   80392 cri.go:89] found id: "ae93f08af7cde91292f142bed64324d42e3c1e7deb6434ab827d5ecf8065d37c"
	I1123 09:25:18.531342   80392 cri.go:89] found id: "58c6caa5d7a2a89fda27f06ce40a04b27480a4b2e04bb5411861ff89abe5e146"
	I1123 09:25:18.531351   80392 cri.go:89] found id: "021ee69331dd21bf229ecd1db5d55d798fd0eee37dc0bb0b9a624c5cbccc770f"
	I1123 09:25:18.531354   80392 cri.go:89] found id: "59c5e7c66e3835f4f14bd4a82f661c738488bc6c624cc2fd13eabad0519797c8"
	I1123 09:25:18.531357   80392 cri.go:89] found id: "f4fec8768321222a9f9bf178328a43695dc72a29313975ce785004a208ca5af3"
	I1123 09:25:18.531359   80392 cri.go:89] found id: "00cf685e4f7633fdc7ff68303c67f4f43add29e8d9ccd87d21a6a088f2fdbc68"
	I1123 09:25:18.531363   80392 cri.go:89] found id: "6e05171fad5d43e018ce9c94cfb7891e9984df090d0b8adddc8122e2efd84ff6"
	I1123 09:25:18.531366   80392 cri.go:89] found id: "da035bc9e46eb83341d5eb40ca2fa703f3cea336a6a912ef4a80eeaf0a0ac076"
	I1123 09:25:18.531376   80392 cri.go:89] found id: "c21acab334cad461bd90789dbd1cf7e4a162446d76ac18a241cd3b8f9863be14"
	I1123 09:25:18.531382   80392 cri.go:89] found id: "8f3fdc51b52f6779513f36acefb86bcc8943baf18483f08bf8cce60927bd9cd4"
	I1123 09:25:18.531386   80392 cri.go:89] found id: "01d6b9bf1de88e27a372ace627c4c029fd51c26dd3f9e477e70137ecab416c36"
	I1123 09:25:18.531389   80392 cri.go:89] found id: "403102191b13c2eef45478f5af6a1ed72ff7fbdea27c8bebc65ffccf6197a3be"
	I1123 09:25:18.531392   80392 cri.go:89] found id: "d98e916f227153ff84dad39f7895deed814fbbef0272aa14546e6a49f6c7226d"
	I1123 09:25:18.531395   80392 cri.go:89] found id: "628b56a1e0e47a0532ea5375471e5d17f64b1bece8bd8004b4ed449cf90764a3"
	I1123 09:25:18.531398   80392 cri.go:89] found id: "93dfa5558a7a808c5c354787ab8eec238559016b46eb3e6825f32eb25403e092"
	I1123 09:25:18.531401   80392 cri.go:89] found id: "b5f64ab3094a653f0bd8f634e5e2cc5066d0b571ace3c66c888b4190eadc2d99"
	I1123 09:25:18.531404   80392 cri.go:89] found id: "d8909c0c21553cdb1824a36e8e2357948596cd908eaa63008f1925c3a97b4f14"
	I1123 09:25:18.531406   80392 cri.go:89] found id: ""
	I1123 09:25:18.531455   80392 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:25:18.546679   80392 out.go:203] 
	W1123 09:25:18.548427   80392 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:25:18.548449   80392 out.go:285] * 
	* 
	W1123 09:25:18.552429   80392 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:25:18.553773   80392 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-768607 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.43s)

TestAddons/parallel/Ingress (147.91s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-768607 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-768607 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-768607 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [50557582-678c-4615-bcda-268162632162] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [50557582-678c-4615-bcda-268162632162] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003513073s
I1123 09:25:21.084591   67870 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-768607 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-768607 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.432660759s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-768607 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-768607 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-768607
helpers_test.go:243: (dbg) docker inspect addons-768607:

-- stdout --
	[
	    {
	        "Id": "6e966db2f1a57a063d5b1f4866cae1e860dd794b89727fc482702ed6ac3082b2",
	        "Created": "2025-11-23T09:23:02.86656893Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 69991,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:23:02.897619684Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/6e966db2f1a57a063d5b1f4866cae1e860dd794b89727fc482702ed6ac3082b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6e966db2f1a57a063d5b1f4866cae1e860dd794b89727fc482702ed6ac3082b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/6e966db2f1a57a063d5b1f4866cae1e860dd794b89727fc482702ed6ac3082b2/hosts",
	        "LogPath": "/var/lib/docker/containers/6e966db2f1a57a063d5b1f4866cae1e860dd794b89727fc482702ed6ac3082b2/6e966db2f1a57a063d5b1f4866cae1e860dd794b89727fc482702ed6ac3082b2-json.log",
	        "Name": "/addons-768607",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-768607:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-768607",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6e966db2f1a57a063d5b1f4866cae1e860dd794b89727fc482702ed6ac3082b2",
	                "LowerDir": "/var/lib/docker/overlay2/b2a7f2104ed49d12c661afd063ce774ea22c13012302c7cf4abbbe5d18af635c-init/diff:/var/lib/docker/overlay2/fa24abb4c55f78a010c7e2a32f724b8d5e912441e40bb77877899b0e5f3a9c8d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b2a7f2104ed49d12c661afd063ce774ea22c13012302c7cf4abbbe5d18af635c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b2a7f2104ed49d12c661afd063ce774ea22c13012302c7cf4abbbe5d18af635c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b2a7f2104ed49d12c661afd063ce774ea22c13012302c7cf4abbbe5d18af635c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-768607",
	                "Source": "/var/lib/docker/volumes/addons-768607/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-768607",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-768607",
	                "name.minikube.sigs.k8s.io": "addons-768607",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ade245353e082f83fd8f44e41d063370cfe3240a56a17ac35203712ce7ac5053",
	            "SandboxKey": "/var/run/docker/netns/ade245353e08",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-768607": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d05a2e060af245071e9e38162d3b1dfea063be4b3ecf7939f3ceb965fdb3a2a7",
	                    "EndpointID": "12a9efc2699fe940833b0219ad40f1acc062c309e4d3677f6f31c7e2141ecdba",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "76:16:1b:7f:3a:95",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-768607",
	                        "6e966db2f1a5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-768607 -n addons-768607
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-768607 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-768607 logs -n 25: (1.133937708s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-045361 --alsologtostderr --binary-mirror http://127.0.0.1:36233 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-045361 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │                     │
	│ delete  │ -p binary-mirror-045361                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-045361 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ addons  │ enable dashboard -p addons-768607                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │                     │
	│ addons  │ disable dashboard -p addons-768607                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │                     │
	│ start   │ -p addons-768607 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:24 UTC │
	│ addons  │ addons-768607 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:24 UTC │                     │
	│ addons  │ addons-768607 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │                     │
	│ addons  │ enable headlamp -p addons-768607 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │                     │
	│ addons  │ addons-768607 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │                     │
	│ addons  │ addons-768607 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │                     │
	│ addons  │ addons-768607 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │                     │
	│ ssh     │ addons-768607 ssh cat /opt/local-path-provisioner/pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │ 23 Nov 25 09:25 UTC │
	│ addons  │ addons-768607 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │                     │
	│ ip      │ addons-768607 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │ 23 Nov 25 09:25 UTC │
	│ addons  │ addons-768607 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │                     │
	│ addons  │ addons-768607 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-768607                                                                                                                                                                                                                                                                                                                                                                                           │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │ 23 Nov 25 09:25 UTC │
	│ addons  │ addons-768607 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │                     │
	│ ssh     │ addons-768607 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │                     │
	│ addons  │ addons-768607 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │                     │
	│ addons  │ addons-768607 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │                     │
	│ addons  │ addons-768607 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │                     │
	│ addons  │ addons-768607 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:26 UTC │                     │
	│ addons  │ addons-768607 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:26 UTC │                     │
	│ ip      │ addons-768607 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-768607        │ jenkins │ v1.37.0 │ 23 Nov 25 09:27 UTC │ 23 Nov 25 09:27 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:22:41
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:22:41.845178   69327 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:22:41.845442   69327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:22:41.845452   69327 out.go:374] Setting ErrFile to fd 2...
	I1123 09:22:41.845456   69327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:22:41.845647   69327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:22:41.846174   69327 out.go:368] Setting JSON to false
	I1123 09:22:41.846976   69327 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7503,"bootTime":1763882259,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:22:41.847027   69327 start.go:143] virtualization: kvm guest
	I1123 09:22:41.848606   69327 out.go:179] * [addons-768607] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:22:41.849584   69327 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 09:22:41.849652   69327 notify.go:221] Checking for updates...
	I1123 09:22:41.851424   69327 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:22:41.852541   69327 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 09:22:41.853546   69327 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 09:22:41.854398   69327 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:22:41.855153   69327 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:22:41.856138   69327 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:22:41.877291   69327 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:22:41.877405   69327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:22:41.935628   69327 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-23 09:22:41.926466694 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:22:41.935738   69327 docker.go:319] overlay module found
	I1123 09:22:41.937697   69327 out.go:179] * Using the docker driver based on user configuration
	I1123 09:22:41.938581   69327 start.go:309] selected driver: docker
	I1123 09:22:41.938599   69327 start.go:927] validating driver "docker" against <nil>
	I1123 09:22:41.938611   69327 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:22:41.939144   69327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:22:41.996634   69327 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-23 09:22:41.987036699 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:22:41.996880   69327 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 09:22:41.997172   69327 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:22:41.998547   69327 out.go:179] * Using Docker driver with root privileges
	I1123 09:22:41.999372   69327 cni.go:84] Creating CNI manager for ""
	I1123 09:22:41.999451   69327 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:22:41.999463   69327 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 09:22:41.999553   69327 start.go:353] cluster config:
	{Name:addons-768607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-768607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1123 09:22:42.000586   69327 out.go:179] * Starting "addons-768607" primary control-plane node in "addons-768607" cluster
	I1123 09:22:42.001477   69327 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:22:42.002514   69327 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:22:42.003510   69327 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:22:42.003538   69327 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 09:22:42.003546   69327 cache.go:65] Caching tarball of preloaded images
	I1123 09:22:42.003586   69327 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:22:42.003622   69327 preload.go:238] Found /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 09:22:42.003633   69327 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:22:42.003963   69327 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/config.json ...
	I1123 09:22:42.003986   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/config.json: {Name:mk172409a5230dba5b2cb2ce3fd515465b507f51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:22:42.019536   69327 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 09:22:42.019669   69327 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 09:22:42.019686   69327 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1123 09:22:42.019691   69327 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1123 09:22:42.019702   69327 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1123 09:22:42.019712   69327 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1123 09:22:54.705848   69327 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1123 09:22:54.705888   69327 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:22:54.705938   69327 start.go:360] acquireMachinesLock for addons-768607: {Name:mkc7494b2a4d470d5bd9858d5c41d565f6324348 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:22:54.706041   69327 start.go:364] duration metric: took 80.772µs to acquireMachinesLock for "addons-768607"
	I1123 09:22:54.706065   69327 start.go:93] Provisioning new machine with config: &{Name:addons-768607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-768607 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:22:54.706158   69327 start.go:125] createHost starting for "" (driver="docker")
	I1123 09:22:54.707610   69327 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1123 09:22:54.707855   69327 start.go:159] libmachine.API.Create for "addons-768607" (driver="docker")
	I1123 09:22:54.707888   69327 client.go:173] LocalClient.Create starting
	I1123 09:22:54.708018   69327 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem
	I1123 09:22:54.740873   69327 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem
	I1123 09:22:55.010083   69327 cli_runner.go:164] Run: docker network inspect addons-768607 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 09:22:55.028016   69327 cli_runner.go:211] docker network inspect addons-768607 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 09:22:55.028109   69327 network_create.go:284] running [docker network inspect addons-768607] to gather additional debugging logs...
	I1123 09:22:55.028134   69327 cli_runner.go:164] Run: docker network inspect addons-768607
	W1123 09:22:55.043647   69327 cli_runner.go:211] docker network inspect addons-768607 returned with exit code 1
	I1123 09:22:55.043674   69327 network_create.go:287] error running [docker network inspect addons-768607]: docker network inspect addons-768607: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-768607 not found
	I1123 09:22:55.043699   69327 network_create.go:289] output of [docker network inspect addons-768607]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-768607 not found
	
	** /stderr **
	I1123 09:22:55.043811   69327 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:22:55.060754   69327 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014f3020}
	I1123 09:22:55.060791   69327 network_create.go:124] attempt to create docker network addons-768607 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1123 09:22:55.060839   69327 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-768607 addons-768607
	I1123 09:22:55.105657   69327 network_create.go:108] docker network addons-768607 192.168.49.0/24 created
	I1123 09:22:55.105696   69327 kic.go:121] calculated static IP "192.168.49.2" for the "addons-768607" container
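For reference, the network bootstrap logged above can be reproduced with the same docker CLI calls; a minimal sketch using only the flags visible in this run (profile name, subnet and MTU taken from the log):

	# inspect fails with exit code 1 while the network does not exist yet, as seen above
	docker network inspect addons-768607 >/dev/null 2>&1 || \
	docker network create --driver=bridge \
	    --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	    --label=created_by.minikube.sigs.k8s.io=true \
	    --label=name.minikube.sigs.k8s.io=addons-768607 \
	    addons-768607
	# 192.168.49.2 (ClientMin of the chosen subnet) is then reserved as the node IP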
	I1123 09:22:55.105767   69327 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 09:22:55.121034   69327 cli_runner.go:164] Run: docker volume create addons-768607 --label name.minikube.sigs.k8s.io=addons-768607 --label created_by.minikube.sigs.k8s.io=true
	I1123 09:22:55.138081   69327 oci.go:103] Successfully created a docker volume addons-768607
	I1123 09:22:55.138177   69327 cli_runner.go:164] Run: docker run --rm --name addons-768607-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-768607 --entrypoint /usr/bin/test -v addons-768607:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 09:22:58.609105   69327 cli_runner.go:217] Completed: docker run --rm --name addons-768607-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-768607 --entrypoint /usr/bin/test -v addons-768607:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (3.470859205s)
	I1123 09:22:58.609145   69327 oci.go:107] Successfully prepared a docker volume addons-768607
	I1123 09:22:58.609200   69327 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:22:58.609216   69327 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 09:22:58.609304   69327 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-768607:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 09:23:02.790011   69327 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-768607:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.180629873s)
	I1123 09:23:02.790049   69327 kic.go:203] duration metric: took 4.180829291s to extract preloaded images to volume ...
	W1123 09:23:02.790376   69327 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 09:23:02.790430   69327 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 09:23:02.790486   69327 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 09:23:02.849460   69327 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-768607 --name addons-768607 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-768607 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-768607 --network addons-768607 --ip 192.168.49.2 --volume addons-768607:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
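Condensed, the container-creation sequence in the long commands above amounts to a named volume, a one-shot tar extraction of the preload into that volume, and the node container itself. A trimmed sketch (ports, tmpfs and security options omitted; the full flags are in the log lines above, paths and image ref taken from this run):

	KIC_IMAGE='gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f'
	PRELOAD=/home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4

	# volume that backs /var of the node container
	docker volume create addons-768607 \
	    --label name.minikube.sigs.k8s.io=addons-768607 --label created_by.minikube.sigs.k8s.io=true

	# one-shot helper that unpacks the preloaded images into the volume
	docker run --rm --entrypoint /usr/bin/tar \
	    -v "$PRELOAD":/preloaded.tar:ro -v addons-768607:/extractDir \
	    "$KIC_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir

	# the long-lived node container (trimmed; see the full `docker run -d -t --privileged ...` above)
	docker run -d -t --privileged --hostname addons-768607 --name addons-768607 \
	    --network addons-768607 --ip 192.168.49.2 --volume addons-768607:/var \
	    --memory=4096mb "$KIC_IMAGE"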
	I1123 09:23:03.165152   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Running}}
	I1123 09:23:03.183675   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:03.201623   69327 cli_runner.go:164] Run: docker exec addons-768607 stat /var/lib/dpkg/alternatives/iptables
	I1123 09:23:03.248707   69327 oci.go:144] the created container "addons-768607" has a running status.
	I1123 09:23:03.248742   69327 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa...
	I1123 09:23:03.418239   69327 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 09:23:03.445982   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:03.472604   69327 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 09:23:03.472633   69327 kic_runner.go:114] Args: [docker exec --privileged addons-768607 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 09:23:03.523554   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:03.546116   69327 machine.go:94] provisionDockerMachine start ...
	I1123 09:23:03.546227   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:03.566920   69327 main.go:143] libmachine: Using SSH client type: native
	I1123 09:23:03.567214   69327 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 09:23:03.567241   69327 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:23:03.712999   69327 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-768607
	
	I1123 09:23:03.713058   69327 ubuntu.go:182] provisioning hostname "addons-768607"
	I1123 09:23:03.713194   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:03.732920   69327 main.go:143] libmachine: Using SSH client type: native
	I1123 09:23:03.733239   69327 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 09:23:03.733302   69327 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-768607 && echo "addons-768607" | sudo tee /etc/hostname
	I1123 09:23:03.887999   69327 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-768607
	
	I1123 09:23:03.888115   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:03.905952   69327 main.go:143] libmachine: Using SSH client type: native
	I1123 09:23:03.906210   69327 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 09:23:03.906235   69327 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-768607' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-768607/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-768607' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:23:04.050203   69327 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:23:04.050248   69327 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-64343/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-64343/.minikube}
	I1123 09:23:04.050324   69327 ubuntu.go:190] setting up certificates
	I1123 09:23:04.050354   69327 provision.go:84] configureAuth start
	I1123 09:23:04.050441   69327 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-768607
	I1123 09:23:04.067805   69327 provision.go:143] copyHostCerts
	I1123 09:23:04.067923   69327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem (1082 bytes)
	I1123 09:23:04.068045   69327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem (1123 bytes)
	I1123 09:23:04.068130   69327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem (1675 bytes)
	I1123 09:23:04.068197   69327 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem org=jenkins.addons-768607 san=[127.0.0.1 192.168.49.2 addons-768607 localhost minikube]
	I1123 09:23:04.159128   69327 provision.go:177] copyRemoteCerts
	I1123 09:23:04.159193   69327 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:23:04.159233   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:04.176940   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:04.278711   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 09:23:04.298581   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 09:23:04.316897   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:23:04.334219   69327 provision.go:87] duration metric: took 283.834823ms to configureAuth
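The machine certs copied above (ca.pem, server.pem, server-key.pem) can be sanity-checked outside the test with openssl; an illustrative check, using paths from this run, that the server cert chains to the generated CA and carries the SANs listed in the provision step:

	MK=/home/jenkins/minikube-integration/21968-64343/.minikube
	openssl verify -CAfile "$MK/certs/ca.pem" "$MK/machines/server.pem"
	openssl x509 -in "$MK/machines/server.pem" -noout -text | grep -A1 'Subject Alternative Name'
	# expected SANs per the log: 127.0.0.1 192.168.49.2 addons-768607 localhost minikube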
	I1123 09:23:04.334251   69327 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:23:04.334561   69327 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:23:04.334724   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:04.352792   69327 main.go:143] libmachine: Using SSH client type: native
	I1123 09:23:04.353071   69327 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 09:23:04.353115   69327 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:23:04.636919   69327 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:23:04.636946   69327 machine.go:97] duration metric: took 1.090791665s to provisionDockerMachine
	I1123 09:23:04.636958   69327 client.go:176] duration metric: took 9.929061873s to LocalClient.Create
	I1123 09:23:04.636978   69327 start.go:167] duration metric: took 9.92912503s to libmachine.API.Create "addons-768607"
	I1123 09:23:04.636993   69327 start.go:293] postStartSetup for "addons-768607" (driver="docker")
	I1123 09:23:04.637006   69327 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:23:04.637062   69327 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:23:04.637122   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:04.654110   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:04.757065   69327 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:23:04.760730   69327 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:23:04.760756   69327 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:23:04.760769   69327 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/addons for local assets ...
	I1123 09:23:04.760829   69327 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/files for local assets ...
	I1123 09:23:04.760853   69327 start.go:296] duration metric: took 123.854136ms for postStartSetup
	I1123 09:23:04.761182   69327 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-768607
	I1123 09:23:04.778522   69327 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/config.json ...
	I1123 09:23:04.778814   69327 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:23:04.778871   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:04.797143   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:04.895204   69327 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:23:04.899849   69327 start.go:128] duration metric: took 10.193673517s to createHost
	I1123 09:23:04.899877   69327 start.go:83] releasing machines lock for "addons-768607", held for 10.193824633s
	I1123 09:23:04.899951   69327 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-768607
	I1123 09:23:04.916463   69327 ssh_runner.go:195] Run: cat /version.json
	I1123 09:23:04.916503   69327 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:23:04.916523   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:04.916572   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:04.934644   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:04.935936   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:05.088006   69327 ssh_runner.go:195] Run: systemctl --version
	I1123 09:23:05.094414   69327 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:23:05.128787   69327 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:23:05.133338   69327 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:23:05.133391   69327 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:23:05.158905   69327 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 09:23:05.158928   69327 start.go:496] detecting cgroup driver to use...
	I1123 09:23:05.158963   69327 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:23:05.159017   69327 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:23:05.174583   69327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:23:05.186476   69327 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:23:05.186539   69327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:23:05.202722   69327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:23:05.219530   69327 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:23:05.298512   69327 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:23:05.382299   69327 docker.go:234] disabling docker service ...
	I1123 09:23:05.382367   69327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:23:05.400047   69327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:23:05.412281   69327 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:23:05.495822   69327 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:23:05.576005   69327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
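With Docker and cri-dockerd stopped, disabled or masked as above, cri-o is left as the only container runtime. One way to confirm the resulting unit states afterwards (illustrative, not something the test itself runs):

	# inside the node container (docker exec addons-768607 ... or minikube ssh)
	sudo systemctl is-enabled docker.service docker.socket cri-docker.service cri-docker.socket
	# expected, matching the commands above: masked, disabled, masked, disabled
	# (the exit status is non-zero because none of the units is enabled; only the printed states matter)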
	I1123 09:23:05.588612   69327 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:23:05.602458   69327 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:23:05.602511   69327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:23:05.612805   69327 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 09:23:05.612869   69327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:23:05.621707   69327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:23:05.630359   69327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:23:05.638875   69327 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:23:05.646785   69327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:23:05.655542   69327 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:23:05.668796   69327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:23:05.677299   69327 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:23:05.684501   69327 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1123 09:23:05.684578   69327 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1123 09:23:05.696079   69327 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:23:05.703336   69327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:23:05.783105   69327 ssh_runner.go:195] Run: sudo systemctl restart crio
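The chain of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before the restart; a quick way to confirm the values the log says were set (run inside the node container):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# expected, based on the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [...])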
	I1123 09:23:05.916567   69327 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:23:05.916641   69327 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:23:05.920552   69327 start.go:564] Will wait 60s for crictl version
	I1123 09:23:05.920616   69327 ssh_runner.go:195] Run: which crictl
	I1123 09:23:05.923971   69327 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:23:05.948178   69327 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:23:05.948275   69327 ssh_runner.go:195] Run: crio --version
	I1123 09:23:05.975840   69327 ssh_runner.go:195] Run: crio --version
	I1123 09:23:06.004747   69327 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:23:06.005848   69327 cli_runner.go:164] Run: docker network inspect addons-768607 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:23:06.021825   69327 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 09:23:06.025743   69327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:23:06.035593   69327 kubeadm.go:884] updating cluster {Name:addons-768607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-768607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:23:06.035745   69327 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:23:06.035798   69327 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:23:06.065768   69327 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:23:06.065794   69327 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:23:06.065842   69327 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:23:06.090810   69327 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:23:06.090832   69327 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:23:06.090842   69327 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1123 09:23:06.090934   69327 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-768607 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-768607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:23:06.091004   69327 ssh_runner.go:195] Run: crio config
	I1123 09:23:06.136226   69327 cni.go:84] Creating CNI manager for ""
	I1123 09:23:06.136252   69327 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:23:06.136274   69327 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:23:06.136305   69327 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-768607 NodeName:addons-768607 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:23:06.136457   69327 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-768607"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:23:06.136530   69327 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:23:06.144621   69327 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:23:06.144704   69327 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:23:06.152199   69327 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1123 09:23:06.164411   69327 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:23:06.179276   69327 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
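The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new; the exact bootstrap invocation is not shown in this excerpt, but it is roughly of the form below (a sketch only, using the binaries directory listed in the kubelet unit above):

	# inside the node container, once kubeadm.yaml is in place
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml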
	I1123 09:23:06.191428   69327 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:23:06.194885   69327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:23:06.204489   69327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:23:06.281510   69327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:23:06.305442   69327 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607 for IP: 192.168.49.2
	I1123 09:23:06.305468   69327 certs.go:195] generating shared ca certs ...
	I1123 09:23:06.305487   69327 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.305624   69327 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 09:23:06.392514   69327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt ...
	I1123 09:23:06.392545   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt: {Name:mkb0b2f20c82c92a595b06060c9b28d59726abb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.392711   69327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key ...
	I1123 09:23:06.392722   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key: {Name:mk0e916e50a2a76a994240de1927c80f62fdb3ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.392795   69327 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 09:23:06.466894   69327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt ...
	I1123 09:23:06.466923   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt: {Name:mkf6247adea6b984cc4f63b3f8a2487a7fd6e5f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.467082   69327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key ...
	I1123 09:23:06.467106   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key: {Name:mkd55ce5b37dd005e47af224f829fd3cd6df381e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.467180   69327 certs.go:257] generating profile certs ...
	I1123 09:23:06.467255   69327 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.key
	I1123 09:23:06.467271   69327 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt with IP's: []
	I1123 09:23:06.629150   69327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt ...
	I1123 09:23:06.629184   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: {Name:mk6e6fbdb023797ced59d7c2fefde3822f09ba65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.629351   69327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.key ...
	I1123 09:23:06.629363   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.key: {Name:mk1e879382e5b1ad328d77fd893a51a75b477bcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.629434   69327 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.key.6a296f45
	I1123 09:23:06.629457   69327 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.crt.6a296f45 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1123 09:23:06.756051   69327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.crt.6a296f45 ...
	I1123 09:23:06.756084   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.crt.6a296f45: {Name:mkc56f10ea3bbeb10badaf9747f7867d6936e98d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.756256   69327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.key.6a296f45 ...
	I1123 09:23:06.756269   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.key.6a296f45: {Name:mk438501f50e865a11b9d5fbb813ea11f0ed7beb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.756337   69327 certs.go:382] copying /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.crt.6a296f45 -> /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.crt
	I1123 09:23:06.756411   69327 certs.go:386] copying /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.key.6a296f45 -> /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.key
	I1123 09:23:06.756461   69327 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/proxy-client.key
	I1123 09:23:06.756481   69327 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/proxy-client.crt with IP's: []
	I1123 09:23:06.773293   69327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/proxy-client.crt ...
	I1123 09:23:06.773317   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/proxy-client.crt: {Name:mk4e53bb9dce8aa26c68ece66b65e11396e99a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.773446   69327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/proxy-client.key ...
	I1123 09:23:06.773456   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/proxy-client.key: {Name:mk24afcff5b8283bc06b53a25a5501bfd9b6a1bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.773614   69327 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 09:23:06.773649   69327 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 09:23:06.773674   69327 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:23:06.773699   69327 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 09:23:06.774278   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:23:06.792437   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 09:23:06.809224   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:23:06.826226   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 09:23:06.842879   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1123 09:23:06.859663   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 09:23:06.876306   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:23:06.892887   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:23:06.909695   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:23:06.927886   69327 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
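Note: the certs.go steps at 09:23:06 generate the shared minikubeCA and proxyClientCA, a "minikube-user" client cert, an apiserver serving cert whose SANs include the service ClusterIP (10.96.0.1) and the node IP (192.168.49.2), and the aggregator proxy-client cert, then scp them into /var/lib/minikube/certs so kubeadm can report "Using existing ca certificate authority" below. A hedged Go sketch (an assumed verification helper, not part of minikube) that reads the copied apiserver cert on the node and prints its SANs so the list logged at 09:23:06.629457 can be cross-checked:

// Assumed helper: print the SANs of the apiserver serving certificate that
// was copied to the node above. Path taken from the scp line in the log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Expect the IPs from the log: 10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
}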
	I1123 09:23:06.939758   69327 ssh_runner.go:195] Run: openssl version
	I1123 09:23:06.945661   69327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:23:06.955893   69327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:23:06.959316   69327 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:23:06.959369   69327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:23:06.993189   69327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:23:07.001970   69327 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:23:07.005411   69327 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 09:23:07.005477   69327 kubeadm.go:401] StartCluster: {Name:addons-768607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-768607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:23:07.005581   69327 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:23:07.005638   69327 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:23:07.031748   69327 cri.go:89] found id: ""
	I1123 09:23:07.031818   69327 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:23:07.039684   69327 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 09:23:07.047505   69327 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 09:23:07.047563   69327 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 09:23:07.055062   69327 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 09:23:07.055080   69327 kubeadm.go:158] found existing configuration files:
	
	I1123 09:23:07.055147   69327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 09:23:07.062466   69327 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 09:23:07.062520   69327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 09:23:07.069889   69327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 09:23:07.077584   69327 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 09:23:07.077641   69327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 09:23:07.084751   69327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 09:23:07.092127   69327 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 09:23:07.092191   69327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 09:23:07.099537   69327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 09:23:07.107067   69327 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 09:23:07.107138   69327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 09:23:07.114240   69327 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 09:23:07.150396   69327 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 09:23:07.150479   69327 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 09:23:07.183101   69327 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 09:23:07.183197   69327 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 09:23:07.183251   69327 kubeadm.go:319] OS: Linux
	I1123 09:23:07.183326   69327 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 09:23:07.183386   69327 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 09:23:07.183453   69327 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 09:23:07.183522   69327 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 09:23:07.183592   69327 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 09:23:07.183667   69327 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 09:23:07.183733   69327 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 09:23:07.183804   69327 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 09:23:07.239833   69327 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 09:23:07.239999   69327 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 09:23:07.240147   69327 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 09:23:07.247511   69327 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 09:23:07.249229   69327 out.go:252]   - Generating certificates and keys ...
	I1123 09:23:07.249319   69327 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 09:23:07.249383   69327 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 09:23:07.654184   69327 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 09:23:07.938866   69327 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 09:23:08.066210   69327 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 09:23:08.152082   69327 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 09:23:08.273989   69327 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 09:23:08.274130   69327 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-768607 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 09:23:08.598691   69327 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 09:23:08.598864   69327 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-768607 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 09:23:08.990245   69327 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 09:23:09.208570   69327 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 09:23:09.461890   69327 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 09:23:09.461969   69327 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 09:23:09.596030   69327 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 09:23:10.494405   69327 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 09:23:10.719186   69327 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 09:23:10.809276   69327 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 09:23:11.417821   69327 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 09:23:11.418434   69327 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 09:23:11.421971   69327 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 09:23:11.423330   69327 out.go:252]   - Booting up control plane ...
	I1123 09:23:11.423446   69327 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 09:23:11.423556   69327 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 09:23:11.424260   69327 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 09:23:11.438487   69327 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 09:23:11.438617   69327 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 09:23:11.444841   69327 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 09:23:11.445150   69327 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 09:23:11.445214   69327 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 09:23:11.538119   69327 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 09:23:11.538296   69327 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 09:23:12.039796   69327 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.810666ms
	I1123 09:23:12.042527   69327 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 09:23:12.042652   69327 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1123 09:23:12.042778   69327 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 09:23:12.042848   69327 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 09:23:13.692589   69327 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.649903678s
	I1123 09:23:13.841728   69327 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.799124192s
	I1123 09:23:15.544311   69327 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501609481s
	I1123 09:23:15.554895   69327 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 09:23:15.565328   69327 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 09:23:15.572967   69327 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 09:23:15.573188   69327 kubeadm.go:319] [mark-control-plane] Marking the node addons-768607 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 09:23:15.580051   69327 kubeadm.go:319] [bootstrap-token] Using token: 4hjpo5.joyzmp41y87gwlxq
	I1123 09:23:15.582107   69327 out.go:252]   - Configuring RBAC rules ...
	I1123 09:23:15.582244   69327 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 09:23:15.586320   69327 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 09:23:15.590869   69327 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 09:23:15.593804   69327 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 09:23:15.596118   69327 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 09:23:15.598300   69327 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 09:23:15.949911   69327 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 09:23:16.362448   69327 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 09:23:16.950458   69327 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 09:23:16.951594   69327 kubeadm.go:319] 
	I1123 09:23:16.951715   69327 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 09:23:16.951734   69327 kubeadm.go:319] 
	I1123 09:23:16.951822   69327 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 09:23:16.951833   69327 kubeadm.go:319] 
	I1123 09:23:16.951873   69327 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 09:23:16.951948   69327 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 09:23:16.951998   69327 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 09:23:16.952005   69327 kubeadm.go:319] 
	I1123 09:23:16.952078   69327 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 09:23:16.952108   69327 kubeadm.go:319] 
	I1123 09:23:16.952150   69327 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 09:23:16.952157   69327 kubeadm.go:319] 
	I1123 09:23:16.952215   69327 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 09:23:16.952332   69327 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 09:23:16.952431   69327 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 09:23:16.952442   69327 kubeadm.go:319] 
	I1123 09:23:16.952556   69327 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 09:23:16.952659   69327 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 09:23:16.952674   69327 kubeadm.go:319] 
	I1123 09:23:16.952791   69327 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4hjpo5.joyzmp41y87gwlxq \
	I1123 09:23:16.952910   69327 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 \
	I1123 09:23:16.952935   69327 kubeadm.go:319] 	--control-plane 
	I1123 09:23:16.952941   69327 kubeadm.go:319] 
	I1123 09:23:16.953018   69327 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 09:23:16.953034   69327 kubeadm.go:319] 
	I1123 09:23:16.953162   69327 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4hjpo5.joyzmp41y87gwlxq \
	I1123 09:23:16.953264   69327 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 
	I1123 09:23:16.955314   69327 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 09:23:16.955466   69327 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
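Note: kubeadm init above finishes in roughly ten seconds, and the join commands it prints pin the control plane's identity with --discovery-token-ca-cert-hash, which by kubeadm's convention is a SHA-256 digest of the cluster CA certificate's Subject Public Key Info. A hedged Go sketch (assumed helper, not minikube code; the CA path is the one already used earlier in this log) that recomputes the hash so the value printed above can be verified on the node:

// Assumed helper: recompute the --discovery-token-ca-cert-hash from the
// cluster CA certificate (sha256 over the cert's Subject Public Key Info).
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:])) // compare with the hash in the join command above
}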
	I1123 09:23:16.955506   69327 cni.go:84] Creating CNI manager for ""
	I1123 09:23:16.955525   69327 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:23:16.956711   69327 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 09:23:16.957740   69327 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 09:23:16.962022   69327 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 09:23:16.962040   69327 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 09:23:16.974584   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 09:23:17.166897   69327 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 09:23:17.166988   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:17.167001   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-768607 minikube.k8s.io/updated_at=2025_11_23T09_23_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=addons-768607 minikube.k8s.io/primary=true
	I1123 09:23:17.176132   69327 ops.go:34] apiserver oom_adj: -16
	I1123 09:23:17.236581   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:17.736781   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:18.236921   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:18.737583   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:19.237627   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:19.737060   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:20.237135   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:20.737532   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:21.237306   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:21.736925   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:21.802949   69327 kubeadm.go:1114] duration metric: took 4.63603113s to wait for elevateKubeSystemPrivileges
	I1123 09:23:21.802997   69327 kubeadm.go:403] duration metric: took 14.79752545s to StartCluster
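Note: the repeated `kubectl get sa default` calls above (roughly every 500ms from 09:23:17 to 09:23:21) are minikube waiting for the cluster's default service account to appear before treating privilege elevation as complete. A rough, assumed Go equivalent of that poll, shelling out to kubectl the same way the log does (the timeout and cadence are illustrative, not minikube's actual values):

// Assumed sketch of the poll seen above: retry `kubectl get sa default`
// until it succeeds or a deadline passes.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
	kubeconfig := "/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(2 * time.Minute)

	for {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is present")
			return
		}
		if time.Now().After(deadline) {
			log.Fatal("timed out waiting for the default service account")
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}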
	I1123 09:23:21.803023   69327 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:21.803156   69327 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 09:23:21.803634   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:21.803836   69327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 09:23:21.803862   69327 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:23:21.803926   69327 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1123 09:23:21.804068   69327 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:23:21.804117   69327 addons.go:70] Setting ingress-dns=true in profile "addons-768607"
	I1123 09:23:21.804115   69327 addons.go:70] Setting default-storageclass=true in profile "addons-768607"
	I1123 09:23:21.804129   69327 addons.go:70] Setting gcp-auth=true in profile "addons-768607"
	I1123 09:23:21.804138   69327 addons.go:70] Setting registry-creds=true in profile "addons-768607"
	I1123 09:23:21.804141   69327 addons.go:239] Setting addon ingress-dns=true in "addons-768607"
	I1123 09:23:21.804082   69327 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-768607"
	I1123 09:23:21.804152   69327 addons.go:239] Setting addon registry-creds=true in "addons-768607"
	I1123 09:23:21.804155   69327 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-768607"
	I1123 09:23:21.804162   69327 mustload.go:66] Loading cluster: addons-768607
	I1123 09:23:21.804168   69327 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-768607"
	I1123 09:23:21.804180   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.804189   69327 addons.go:70] Setting storage-provisioner=true in profile "addons-768607"
	I1123 09:23:21.804198   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.804200   69327 addons.go:239] Setting addon storage-provisioner=true in "addons-768607"
	I1123 09:23:21.804192   69327 addons.go:70] Setting metrics-server=true in profile "addons-768607"
	I1123 09:23:21.804216   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.804224   69327 addons.go:239] Setting addon metrics-server=true in "addons-768607"
	I1123 09:23:21.804250   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.804253   69327 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-768607"
	I1123 09:23:21.804279   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.804410   69327 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:23:21.804675   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.804721   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.804739   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.804748   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.804751   69327 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-768607"
	I1123 09:23:21.804751   69327 addons.go:70] Setting inspektor-gadget=true in profile "addons-768607"
	I1123 09:23:21.804764   69327 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-768607"
	I1123 09:23:21.804767   69327 addons.go:239] Setting addon inspektor-gadget=true in "addons-768607"
	I1123 09:23:21.804782   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.804792   69327 addons.go:70] Setting registry=true in profile "addons-768607"
	I1123 09:23:21.804807   69327 addons.go:239] Setting addon registry=true in "addons-768607"
	I1123 09:23:21.804824   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.805220   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.805255   69327 addons.go:70] Setting volcano=true in profile "addons-768607"
	I1123 09:23:21.805271   69327 addons.go:239] Setting addon volcano=true in "addons-768607"
	I1123 09:23:21.805301   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.805703   69327 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-768607"
	I1123 09:23:21.805727   69327 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-768607"
	I1123 09:23:21.806004   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.806097   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.804181   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.806704   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.806883   69327 addons.go:70] Setting volumesnapshots=true in profile "addons-768607"
	I1123 09:23:21.806900   69327 addons.go:239] Setting addon volumesnapshots=true in "addons-768607"
	I1123 09:23:21.806936   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.807225   69327 addons.go:70] Setting cloud-spanner=true in profile "addons-768607"
	I1123 09:23:21.807260   69327 addons.go:239] Setting addon cloud-spanner=true in "addons-768607"
	I1123 09:23:21.807288   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.807437   69327 out.go:179] * Verifying Kubernetes components...
	I1123 09:23:21.804145   69327 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-768607"
	I1123 09:23:21.804074   69327 addons.go:70] Setting yakd=true in profile "addons-768607"
	I1123 09:23:21.804739   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.804109   69327 addons.go:70] Setting ingress=true in profile "addons-768607"
	I1123 09:23:21.807713   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.809856   69327 addons.go:239] Setting addon ingress=true in "addons-768607"
	I1123 09:23:21.810036   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.804783   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.810702   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.810796   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.804739   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.811969   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.812206   69327 addons.go:239] Setting addon yakd=true in "addons-768607"
	I1123 09:23:21.813107   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.813497   69327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:23:21.817998   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.818976   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.819966   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.858486   69327 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1123 09:23:21.860948   69327 out.go:179]   - Using image docker.io/registry:3.0.0
	I1123 09:23:21.863845   69327 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1123 09:23:21.863868   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1123 09:23:21.863958   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.869604   69327 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1123 09:23:21.873201   69327 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1123 09:23:21.876702   69327 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1123 09:23:21.876774   69327 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 09:23:21.876809   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1123 09:23:21.876915   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.878718   69327 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1123 09:23:21.881132   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.886285   69327 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1123 09:23:21.887361   69327 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1123 09:23:21.890459   69327 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1123 09:23:21.891448   69327 addons.go:239] Setting addon default-storageclass=true in "addons-768607"
	I1123 09:23:21.891497   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.891624   69327 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-768607"
	I1123 09:23:21.891654   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.891714   69327 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 09:23:21.891729   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1123 09:23:21.891796   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.891984   69327 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1123 09:23:21.892096   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.892130   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.893277   69327 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:23:21.894835   69327 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:23:21.894855   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:23:21.894918   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.895898   69327 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1123 09:23:21.896874   69327 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1123 09:23:21.897964   69327 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1123 09:23:21.897987   69327 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1123 09:23:21.898049   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	W1123 09:23:21.908848   69327 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1123 09:23:21.913120   69327 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1123 09:23:21.914202   69327 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1123 09:23:21.914226   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1123 09:23:21.914291   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.921579   69327 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1123 09:23:21.922014   69327 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1123 09:23:21.924127   69327 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 09:23:21.924194   69327 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1123 09:23:21.924206   69327 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1123 09:23:21.924283   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.925941   69327 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 09:23:21.929884   69327 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 09:23:21.929908   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1123 09:23:21.930009   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.930219   69327 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1123 09:23:21.931534   69327 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1123 09:23:21.931554   69327 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1123 09:23:21.931661   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.933667   69327 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1123 09:23:21.934731   69327 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1123 09:23:21.935667   69327 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1123 09:23:21.935691   69327 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 09:23:21.935707   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1123 09:23:21.935766   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.936642   69327 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 09:23:21.936666   69327 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 09:23:21.936715   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.936712   69327 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 09:23:21.936757   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1123 09:23:21.936820   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.949707   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
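Note: the repeated `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-768607` calls above resolve which host port Docker mapped to the node container's SSH port; the resulting endpoint (127.0.0.1:32768 here) is what each "new ssh client" line dials to copy addon manifests onto the node. A rough, assumed Go equivalent of that lookup (not minikube's implementation; the surrounding single quotes from the logged command are dropped):

// Assumed sketch: ask Docker which host port is mapped to the node
// container's 22/tcp, then dial it on 127.0.0.1.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"addons-768607").Output()
	if err != nil {
		log.Fatal(err)
	}
	port := strings.TrimSpace(string(out))
	fmt.Println("ssh endpoint:", "127.0.0.1:"+port) // e.g. 127.0.0.1:32768 in the log above
}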
	I1123 09:23:21.953374   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:21.970621   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:21.971898   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:21.973630   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:21.974876   69327 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1123 09:23:21.976502   69327 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1123 09:23:21.976523   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1123 09:23:21.976574   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.976967   69327 out.go:179]   - Using image docker.io/busybox:stable
	I1123 09:23:21.978115   69327 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1123 09:23:21.979222   69327 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 09:23:21.979241   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1123 09:23:21.979294   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.979312   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:21.981929   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:21.982070   69327 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:23:21.983134   69327 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:23:21.983570   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.989925   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:21.992193   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:21.992685   69327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 09:23:21.999903   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:22.005184   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:22.012459   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	W1123 09:23:22.013028   69327 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1123 09:23:22.013082   69327 retry.go:31] will retry after 148.455388ms: ssh: handshake failed: EOF
	I1123 09:23:22.017714   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	W1123 09:23:22.019072   69327 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1123 09:23:22.019110   69327 retry.go:31] will retry after 210.280055ms: ssh: handshake failed: EOF
	I1123 09:23:22.030609   69327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:23:22.032441   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:22.034511   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:22.103848   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:23:22.137018   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1123 09:23:22.148398   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 09:23:22.151877   69327 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1123 09:23:22.151899   69327 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1123 09:23:22.155953   69327 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1123 09:23:22.156041   69327 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1123 09:23:22.157358   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 09:23:22.169801   69327 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1123 09:23:22.169826   69327 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1123 09:23:22.171671   69327 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1123 09:23:22.171696   69327 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1123 09:23:22.176935   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 09:23:22.197626   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 09:23:22.198777   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1123 09:23:22.200881   69327 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1123 09:23:22.200903   69327 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1123 09:23:22.201544   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 09:23:22.206593   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:23:22.211899   69327 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1123 09:23:22.211916   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1123 09:23:22.212015   69327 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1123 09:23:22.212022   69327 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1123 09:23:22.224795   69327 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1123 09:23:22.224820   69327 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1123 09:23:22.235741   69327 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1123 09:23:22.235827   69327 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1123 09:23:22.270242   69327 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1123 09:23:22.270272   69327 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1123 09:23:22.275199   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1123 09:23:22.276727   69327 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1123 09:23:22.276744   69327 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1123 09:23:22.308451   69327 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1123 09:23:22.308473   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1123 09:23:22.336747   69327 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1123 09:23:22.336852   69327 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1123 09:23:22.354428   69327 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1123 09:23:22.354459   69327 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1123 09:23:22.354871   69327 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1123 09:23:22.355690   69327 node_ready.go:35] waiting up to 6m0s for node "addons-768607" to be "Ready" ...
	I1123 09:23:22.356378   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 09:23:22.363932   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1123 09:23:22.404808   69327 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 09:23:22.404835   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1123 09:23:22.423709   69327 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1123 09:23:22.423807   69327 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1123 09:23:22.468580   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 09:23:22.476779   69327 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1123 09:23:22.476959   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1123 09:23:22.510190   69327 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 09:23:22.510287   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1123 09:23:22.536548   69327 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1123 09:23:22.536602   69327 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1123 09:23:22.571991   69327 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 09:23:22.572021   69327 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 09:23:22.599055   69327 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1123 09:23:22.599078   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1123 09:23:22.611834   69327 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 09:23:22.611861   69327 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 09:23:22.651680   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 09:23:22.655555   69327 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1123 09:23:22.655579   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1123 09:23:22.685491   69327 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 09:23:22.685520   69327 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1123 09:23:22.719907   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 09:23:22.883867   69327 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-768607" context rescaled to 1 replicas
	I1123 09:23:23.108893   69327 addons.go:495] Verifying addon registry=true in "addons-768607"
	I1123 09:23:23.112256   69327 out.go:179] * Verifying registry addon...
	I1123 09:23:23.115131   69327 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1123 09:23:23.119912   69327 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 09:23:23.119940   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:23.445164   69327 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.088748513s)
	I1123 09:23:23.445222   69327 addons.go:495] Verifying addon ingress=true in "addons-768607"
	I1123 09:23:23.445277   69327 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.081261852s)
	I1123 09:23:23.446545   69327 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-768607 service yakd-dashboard -n yakd-dashboard
	
	I1123 09:23:23.446549   69327 out.go:179] * Verifying ingress addon...
	I1123 09:23:23.449949   69327 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1123 09:23:23.452400   69327 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1123 09:23:23.452424   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:23.618494   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:23.788027   69327 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.319404953s)
	W1123 09:23:23.788073   69327 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1123 09:23:23.788120   69327 retry.go:31] will retry after 241.77724ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1123 09:23:23.788131   69327 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.136414011s)
	I1123 09:23:23.788173   69327 addons.go:495] Verifying addon metrics-server=true in "addons-768607"
	I1123 09:23:23.788362   69327 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.068416328s)
	I1123 09:23:23.788388   69327 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-768607"
	I1123 09:23:23.790468   69327 out.go:179] * Verifying csi-hostpath-driver addon...
	I1123 09:23:23.792480   69327 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1123 09:23:23.794814   69327 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 09:23:23.794830   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:23.953638   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:24.030629   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 09:23:24.118969   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:24.296775   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:24.357872   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:24.453803   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:24.619110   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:24.794928   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:24.952765   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:25.117872   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:25.295567   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:25.453772   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:25.619117   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:25.796198   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:25.953269   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:26.118804   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:26.295341   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:26.358524   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:26.453490   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:26.460990   69327 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.430320989s)
	I1123 09:23:26.618407   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:26.795819   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:26.953223   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:27.118737   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:27.295794   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:27.452741   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:27.618906   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:27.796338   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:27.953034   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:28.118340   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:28.294985   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:28.359066   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:28.453114   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:28.618284   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:28.795647   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:28.953546   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:29.118002   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:29.295757   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:29.453484   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:29.497584   69327 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1123 09:23:29.497660   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:29.514938   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:29.618149   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:29.626524   69327 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1123 09:23:29.638141   69327 addons.go:239] Setting addon gcp-auth=true in "addons-768607"
	I1123 09:23:29.638195   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:29.638543   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:29.656462   69327 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1123 09:23:29.656512   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:29.672854   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:29.770697   69327 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 09:23:29.771763   69327 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1123 09:23:29.772693   69327 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1123 09:23:29.772707   69327 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1123 09:23:29.785893   69327 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1123 09:23:29.785912   69327 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1123 09:23:29.795636   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:29.798659   69327 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 09:23:29.798674   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1123 09:23:29.810716   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 09:23:29.952842   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:30.118138   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:30.122589   69327 addons.go:495] Verifying addon gcp-auth=true in "addons-768607"
	I1123 09:23:30.123782   69327 out.go:179] * Verifying gcp-auth addon...
	I1123 09:23:30.125814   69327 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1123 09:23:30.128446   69327 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1123 09:23:30.128462   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:30.295112   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:30.453326   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:30.618122   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:30.627879   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:30.795647   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:30.859013   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:30.952706   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:31.118444   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:31.128297   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:31.296395   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:31.453602   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:31.618301   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:31.628316   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:31.796395   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:31.953303   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:32.117878   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:32.129161   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:32.295862   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:32.453394   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:32.617935   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:32.628968   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:32.795758   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:32.952515   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:33.118256   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:33.128339   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:33.296358   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:33.358915   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:33.453498   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:33.618179   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:33.628236   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:33.795966   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:33.952901   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:34.118345   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:34.128409   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:34.296117   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:34.453159   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:34.618627   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:34.628811   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:34.795351   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:34.953312   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:35.117934   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:35.129034   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:35.295978   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:35.453581   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:35.618323   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:35.628512   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:35.795413   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:35.858735   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:35.953312   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:36.118892   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:36.129033   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:36.296053   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:36.453190   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:36.619336   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:36.628417   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:36.796235   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:36.953128   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:37.119172   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:37.128120   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:37.296270   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:37.453637   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:37.618777   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:37.628236   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:37.796306   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:37.858834   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:37.953436   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:38.118329   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:38.128296   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:38.296055   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:38.453402   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:38.618211   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:38.628430   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:38.796354   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:38.953715   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:39.118413   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:39.128585   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:39.295669   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:39.452895   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:39.618480   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:39.628482   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:39.794948   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:39.952640   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:40.118753   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:40.128646   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:40.295260   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:40.358856   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:40.453781   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:40.618426   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:40.628349   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:40.795957   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:40.952765   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:41.118467   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:41.128553   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:41.295757   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:41.452803   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:41.618629   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:41.628822   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:41.795606   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:41.952820   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:42.118625   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:42.128377   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:42.296125   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:42.453368   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:42.618122   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:42.628289   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:42.795939   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:42.858461   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:42.953176   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:43.118810   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:43.128929   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:43.296252   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:43.453132   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:43.618703   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:43.628804   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:43.795327   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:43.953144   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:44.118801   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:44.129097   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:44.295929   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:44.453774   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:44.618398   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:44.628685   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:44.795334   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:44.858681   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:44.952906   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:45.118881   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:45.128774   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:45.295862   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:45.453214   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:45.617695   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:45.628789   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:45.795429   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:45.952769   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:46.118705   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:46.128830   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:46.295423   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:46.452702   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:46.618349   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:46.628421   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:46.796159   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:46.858796   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:46.953110   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:47.119126   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:47.128173   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:47.296601   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:47.452979   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:47.618855   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:47.628877   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:47.795432   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:47.953503   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:48.118253   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:48.128322   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:48.296189   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:48.453509   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:48.618151   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:48.628276   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:48.796391   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:48.859172   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:48.952654   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:49.118554   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:49.128744   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:49.295508   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:49.452589   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:49.618216   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:49.628123   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:49.795726   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:49.952297   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:50.117833   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:50.128678   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:50.295348   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:50.452767   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:50.618572   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:50.628713   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:50.795424   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:50.952931   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:51.118873   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:51.128955   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:51.295850   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:51.358303   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:51.452919   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:51.618592   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:51.628725   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:51.795482   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:51.952801   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:52.118644   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:52.128708   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:52.295506   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:52.453192   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:52.619213   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:52.628293   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:52.795972   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:52.953005   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:53.118840   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:53.128927   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:53.295684   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:53.359222   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:53.452693   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:53.618040   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:53.627992   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:53.795487   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:53.953793   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:54.118575   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:54.128463   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:54.295126   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:54.453611   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:54.618102   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:54.627886   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:54.795712   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:54.952424   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:55.117826   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:55.128953   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:55.295579   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:55.454174   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:55.617860   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:55.629076   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:55.795706   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:55.858995   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:55.952336   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:56.118051   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:56.128063   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:56.295862   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:56.452974   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:56.618434   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:56.628629   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:56.795383   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:56.953351   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:57.117981   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:57.128065   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:57.295828   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:57.453338   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:57.617922   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:57.627934   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:57.795696   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:57.859412   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:57.952730   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:58.118397   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:58.128450   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:58.295020   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:58.453564   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:58.618222   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:58.628193   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:58.795989   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:58.953337   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:59.117775   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:59.128870   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:59.295678   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:59.452863   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:59.618677   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:59.628843   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:59.795519   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:59.953368   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:00.118008   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:00.128983   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:00.295792   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:24:00.359149   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:24:00.452681   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:00.618254   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:00.628120   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:00.795799   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:00.952513   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:01.118494   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:01.129049   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:01.296147   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:01.454062   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:01.618924   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:01.629574   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:01.795538   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:01.953822   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:02.119061   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:02.128734   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:02.295798   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:02.453232   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:02.617971   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:02.628522   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:02.795362   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:24:02.859268   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:24:02.952739   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:03.118988   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:03.129744   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:03.295797   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:03.463416   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:03.617531   69327 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 09:24:03.617557   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:03.630608   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:03.796622   69327 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 09:24:03.796653   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:03.860162   69327 node_ready.go:49] node "addons-768607" is "Ready"
	I1123 09:24:03.860204   69327 node_ready.go:38] duration metric: took 41.504482488s for node "addons-768607" to be "Ready" ...
	I1123 09:24:03.860224   69327 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:24:03.860304   69327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:24:03.880557   69327 api_server.go:72] duration metric: took 42.076650324s to wait for apiserver process to appear ...
	I1123 09:24:03.880589   69327 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:24:03.880622   69327 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:24:03.888208   69327 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1123 09:24:03.889956   69327 api_server.go:141] control plane version: v1.34.1
	I1123 09:24:03.890006   69327 api_server.go:131] duration metric: took 9.408531ms to wait for apiserver health ...
	I1123 09:24:03.890020   69327 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:24:03.904451   69327 system_pods.go:59] 20 kube-system pods found
	I1123 09:24:03.904502   69327 system_pods.go:61] "amd-gpu-device-plugin-8vlwk" [579f7026-b306-42b4-868b-da51bdb3aa62] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 09:24:03.904511   69327 system_pods.go:61] "coredns-66bc5c9577-qvd9b" [4338f282-61e8-45dc-8a2a-449a8aa65f64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:24:03.904522   69327 system_pods.go:61] "csi-hostpath-attacher-0" [7d3fe3cd-254a-497b-a24b-8d019fbf5bd6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 09:24:03.904531   69327 system_pods.go:61] "csi-hostpath-resizer-0" [544763e0-73d5-4b61-9007-c6fe9d84f20d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 09:24:03.904540   69327 system_pods.go:61] "csi-hostpathplugin-9ksmc" [394b2c7c-3431-4988-afd6-9c9f91d892b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 09:24:03.904548   69327 system_pods.go:61] "etcd-addons-768607" [3cd401e2-00f1-4f8d-a77c-136d3a2b5209] Running
	I1123 09:24:03.904555   69327 system_pods.go:61] "kindnet-tw8jx" [53e669c8-96ed-4de8-a528-3186e3a55797] Running
	I1123 09:24:03.904559   69327 system_pods.go:61] "kube-apiserver-addons-768607" [170cee0a-a920-415c-b5fc-c342107cf219] Running
	I1123 09:24:03.904564   69327 system_pods.go:61] "kube-controller-manager-addons-768607" [a03de210-0ece-464f-b0c5-ddee1361575e] Running
	I1123 09:24:03.904574   69327 system_pods.go:61] "kube-ingress-dns-minikube" [0a409b6d-8a09-46e8-bcc7-d9820885bc20] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 09:24:03.904579   69327 system_pods.go:61] "kube-proxy-szpms" [1858e471-3133-439c-8335-48c0a459824d] Running
	I1123 09:24:03.904584   69327 system_pods.go:61] "kube-scheduler-addons-768607" [bae6ad44-c046-4778-8875-518fd35d3427] Running
	I1123 09:24:03.904592   69327 system_pods.go:61] "metrics-server-85b7d694d7-gzdxp" [780791bf-6d1f-4a14-a71c-0f02d8863b50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:24:03.904600   69327 system_pods.go:61] "nvidia-device-plugin-daemonset-b9prj" [fa027fa5-6aa4-4e97-a108-f2ce777352d5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 09:24:03.904608   69327 system_pods.go:61] "registry-6b586f9694-wb6sr" [de7eaafd-154b-4e12-962d-23d47c7127a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 09:24:03.904616   69327 system_pods.go:61] "registry-creds-764b6fb674-pf8cs" [b2b57794-0e2a-4a54-b1c1-086e0cf60915] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 09:24:03.904624   69327 system_pods.go:61] "registry-proxy-hvxjj" [abbb6984-3768-48ff-8d09-b43d2af51c4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 09:24:03.904635   69327 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4qkfp" [a3376851-e42c-4ebf-ba15-b05621e85f4b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 09:24:03.904645   69327 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rdc2h" [59dbb46c-b4b8-4ef7-9aba-5e1ad6b160c9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 09:24:03.904651   69327 system_pods.go:61] "storage-provisioner" [ad9a4fd2-465c-41b7-9d68-ca6063fe0d88] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:24:03.904710   69327 system_pods.go:74] duration metric: took 14.679775ms to wait for pod list to return data ...
	I1123 09:24:03.904721   69327 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:24:03.908978   69327 default_sa.go:45] found service account: "default"
	I1123 09:24:03.909020   69327 default_sa.go:55] duration metric: took 4.285307ms for default service account to be created ...
	I1123 09:24:03.909033   69327 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:24:03.998333   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:03.999786   69327 system_pods.go:86] 20 kube-system pods found
	I1123 09:24:03.999821   69327 system_pods.go:89] "amd-gpu-device-plugin-8vlwk" [579f7026-b306-42b4-868b-da51bdb3aa62] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 09:24:03.999828   69327 system_pods.go:89] "coredns-66bc5c9577-qvd9b" [4338f282-61e8-45dc-8a2a-449a8aa65f64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:24:03.999836   69327 system_pods.go:89] "csi-hostpath-attacher-0" [7d3fe3cd-254a-497b-a24b-8d019fbf5bd6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 09:24:03.999841   69327 system_pods.go:89] "csi-hostpath-resizer-0" [544763e0-73d5-4b61-9007-c6fe9d84f20d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 09:24:03.999848   69327 system_pods.go:89] "csi-hostpathplugin-9ksmc" [394b2c7c-3431-4988-afd6-9c9f91d892b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 09:24:03.999854   69327 system_pods.go:89] "etcd-addons-768607" [3cd401e2-00f1-4f8d-a77c-136d3a2b5209] Running
	I1123 09:24:03.999858   69327 system_pods.go:89] "kindnet-tw8jx" [53e669c8-96ed-4de8-a528-3186e3a55797] Running
	I1123 09:24:03.999862   69327 system_pods.go:89] "kube-apiserver-addons-768607" [170cee0a-a920-415c-b5fc-c342107cf219] Running
	I1123 09:24:03.999865   69327 system_pods.go:89] "kube-controller-manager-addons-768607" [a03de210-0ece-464f-b0c5-ddee1361575e] Running
	I1123 09:24:03.999870   69327 system_pods.go:89] "kube-ingress-dns-minikube" [0a409b6d-8a09-46e8-bcc7-d9820885bc20] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 09:24:03.999874   69327 system_pods.go:89] "kube-proxy-szpms" [1858e471-3133-439c-8335-48c0a459824d] Running
	I1123 09:24:03.999877   69327 system_pods.go:89] "kube-scheduler-addons-768607" [bae6ad44-c046-4778-8875-518fd35d3427] Running
	I1123 09:24:03.999882   69327 system_pods.go:89] "metrics-server-85b7d694d7-gzdxp" [780791bf-6d1f-4a14-a71c-0f02d8863b50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:24:03.999890   69327 system_pods.go:89] "nvidia-device-plugin-daemonset-b9prj" [fa027fa5-6aa4-4e97-a108-f2ce777352d5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 09:24:03.999895   69327 system_pods.go:89] "registry-6b586f9694-wb6sr" [de7eaafd-154b-4e12-962d-23d47c7127a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 09:24:03.999902   69327 system_pods.go:89] "registry-creds-764b6fb674-pf8cs" [b2b57794-0e2a-4a54-b1c1-086e0cf60915] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 09:24:03.999907   69327 system_pods.go:89] "registry-proxy-hvxjj" [abbb6984-3768-48ff-8d09-b43d2af51c4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 09:24:03.999916   69327 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4qkfp" [a3376851-e42c-4ebf-ba15-b05621e85f4b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 09:24:03.999922   69327 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rdc2h" [59dbb46c-b4b8-4ef7-9aba-5e1ad6b160c9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 09:24:03.999927   69327 system_pods.go:89] "storage-provisioner" [ad9a4fd2-465c-41b7-9d68-ca6063fe0d88] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:24:03.999945   69327 retry.go:31] will retry after 225.422061ms: missing components: kube-dns
	I1123 09:24:04.119442   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:04.129332   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:04.231552   69327 system_pods.go:86] 20 kube-system pods found
	I1123 09:24:04.231595   69327 system_pods.go:89] "amd-gpu-device-plugin-8vlwk" [579f7026-b306-42b4-868b-da51bdb3aa62] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 09:24:04.231606   69327 system_pods.go:89] "coredns-66bc5c9577-qvd9b" [4338f282-61e8-45dc-8a2a-449a8aa65f64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:24:04.231617   69327 system_pods.go:89] "csi-hostpath-attacher-0" [7d3fe3cd-254a-497b-a24b-8d019fbf5bd6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 09:24:04.231626   69327 system_pods.go:89] "csi-hostpath-resizer-0" [544763e0-73d5-4b61-9007-c6fe9d84f20d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 09:24:04.231640   69327 system_pods.go:89] "csi-hostpathplugin-9ksmc" [394b2c7c-3431-4988-afd6-9c9f91d892b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 09:24:04.231646   69327 system_pods.go:89] "etcd-addons-768607" [3cd401e2-00f1-4f8d-a77c-136d3a2b5209] Running
	I1123 09:24:04.231654   69327 system_pods.go:89] "kindnet-tw8jx" [53e669c8-96ed-4de8-a528-3186e3a55797] Running
	I1123 09:24:04.231660   69327 system_pods.go:89] "kube-apiserver-addons-768607" [170cee0a-a920-415c-b5fc-c342107cf219] Running
	I1123 09:24:04.231668   69327 system_pods.go:89] "kube-controller-manager-addons-768607" [a03de210-0ece-464f-b0c5-ddee1361575e] Running
	I1123 09:24:04.231676   69327 system_pods.go:89] "kube-ingress-dns-minikube" [0a409b6d-8a09-46e8-bcc7-d9820885bc20] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 09:24:04.231682   69327 system_pods.go:89] "kube-proxy-szpms" [1858e471-3133-439c-8335-48c0a459824d] Running
	I1123 09:24:04.231689   69327 system_pods.go:89] "kube-scheduler-addons-768607" [bae6ad44-c046-4778-8875-518fd35d3427] Running
	I1123 09:24:04.231698   69327 system_pods.go:89] "metrics-server-85b7d694d7-gzdxp" [780791bf-6d1f-4a14-a71c-0f02d8863b50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:24:04.231706   69327 system_pods.go:89] "nvidia-device-plugin-daemonset-b9prj" [fa027fa5-6aa4-4e97-a108-f2ce777352d5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 09:24:04.231717   69327 system_pods.go:89] "registry-6b586f9694-wb6sr" [de7eaafd-154b-4e12-962d-23d47c7127a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 09:24:04.231725   69327 system_pods.go:89] "registry-creds-764b6fb674-pf8cs" [b2b57794-0e2a-4a54-b1c1-086e0cf60915] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 09:24:04.231734   69327 system_pods.go:89] "registry-proxy-hvxjj" [abbb6984-3768-48ff-8d09-b43d2af51c4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 09:24:04.231742   69327 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4qkfp" [a3376851-e42c-4ebf-ba15-b05621e85f4b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 09:24:04.231753   69327 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rdc2h" [59dbb46c-b4b8-4ef7-9aba-5e1ad6b160c9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 09:24:04.231772   69327 system_pods.go:89] "storage-provisioner" [ad9a4fd2-465c-41b7-9d68-ca6063fe0d88] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:24:04.231796   69327 retry.go:31] will retry after 386.727357ms: missing components: kube-dns
	I1123 09:24:04.313740   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:04.454378   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:04.618533   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:04.622852   69327 system_pods.go:86] 20 kube-system pods found
	I1123 09:24:04.622891   69327 system_pods.go:89] "amd-gpu-device-plugin-8vlwk" [579f7026-b306-42b4-868b-da51bdb3aa62] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 09:24:04.622901   69327 system_pods.go:89] "coredns-66bc5c9577-qvd9b" [4338f282-61e8-45dc-8a2a-449a8aa65f64] Running
	I1123 09:24:04.622912   69327 system_pods.go:89] "csi-hostpath-attacher-0" [7d3fe3cd-254a-497b-a24b-8d019fbf5bd6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 09:24:04.622920   69327 system_pods.go:89] "csi-hostpath-resizer-0" [544763e0-73d5-4b61-9007-c6fe9d84f20d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 09:24:04.622929   69327 system_pods.go:89] "csi-hostpathplugin-9ksmc" [394b2c7c-3431-4988-afd6-9c9f91d892b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 09:24:04.622935   69327 system_pods.go:89] "etcd-addons-768607" [3cd401e2-00f1-4f8d-a77c-136d3a2b5209] Running
	I1123 09:24:04.622944   69327 system_pods.go:89] "kindnet-tw8jx" [53e669c8-96ed-4de8-a528-3186e3a55797] Running
	I1123 09:24:04.622950   69327 system_pods.go:89] "kube-apiserver-addons-768607" [170cee0a-a920-415c-b5fc-c342107cf219] Running
	I1123 09:24:04.622955   69327 system_pods.go:89] "kube-controller-manager-addons-768607" [a03de210-0ece-464f-b0c5-ddee1361575e] Running
	I1123 09:24:04.622968   69327 system_pods.go:89] "kube-ingress-dns-minikube" [0a409b6d-8a09-46e8-bcc7-d9820885bc20] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 09:24:04.622976   69327 system_pods.go:89] "kube-proxy-szpms" [1858e471-3133-439c-8335-48c0a459824d] Running
	I1123 09:24:04.622982   69327 system_pods.go:89] "kube-scheduler-addons-768607" [bae6ad44-c046-4778-8875-518fd35d3427] Running
	I1123 09:24:04.622995   69327 system_pods.go:89] "metrics-server-85b7d694d7-gzdxp" [780791bf-6d1f-4a14-a71c-0f02d8863b50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:24:04.623004   69327 system_pods.go:89] "nvidia-device-plugin-daemonset-b9prj" [fa027fa5-6aa4-4e97-a108-f2ce777352d5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 09:24:04.623019   69327 system_pods.go:89] "registry-6b586f9694-wb6sr" [de7eaafd-154b-4e12-962d-23d47c7127a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 09:24:04.623026   69327 system_pods.go:89] "registry-creds-764b6fb674-pf8cs" [b2b57794-0e2a-4a54-b1c1-086e0cf60915] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 09:24:04.623041   69327 system_pods.go:89] "registry-proxy-hvxjj" [abbb6984-3768-48ff-8d09-b43d2af51c4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 09:24:04.623051   69327 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4qkfp" [a3376851-e42c-4ebf-ba15-b05621e85f4b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 09:24:04.623062   69327 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rdc2h" [59dbb46c-b4b8-4ef7-9aba-5e1ad6b160c9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 09:24:04.623067   69327 system_pods.go:89] "storage-provisioner" [ad9a4fd2-465c-41b7-9d68-ca6063fe0d88] Running
	I1123 09:24:04.623080   69327 system_pods.go:126] duration metric: took 714.038787ms to wait for k8s-apps to be running ...
	I1123 09:24:04.623105   69327 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:24:04.623167   69327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:24:04.629752   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:04.717911   69327 system_svc.go:56] duration metric: took 94.792733ms WaitForService to wait for kubelet
	I1123 09:24:04.717955   69327 kubeadm.go:587] duration metric: took 42.914053592s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:24:04.717994   69327 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:24:04.721411   69327 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:24:04.721444   69327 node_conditions.go:123] node cpu capacity is 8
	I1123 09:24:04.721465   69327 node_conditions.go:105] duration metric: took 3.464303ms to run NodePressure ...
	I1123 09:24:04.721481   69327 start.go:242] waiting for startup goroutines ...
	I1123 09:24:04.797466   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:04.954009   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:05.119188   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:05.128758   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:05.296384   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:05.453775   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:05.618820   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:05.629548   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:05.795649   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:05.953437   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:06.118887   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:06.129252   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:06.296572   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:06.453947   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:06.619403   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:06.629675   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:06.795780   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:06.953858   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:07.119054   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:07.129145   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:07.296657   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:07.453780   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:07.619026   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:07.630796   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:07.795920   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:07.953613   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:08.118808   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:08.129078   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:08.296132   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:08.452865   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:08.619180   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:08.628619   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:08.795500   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:08.953223   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:09.119370   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:09.129275   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:09.297029   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:09.454070   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:09.619414   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:09.629180   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:09.796067   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:09.953079   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:10.119536   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:10.128893   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:10.296112   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:10.453767   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:10.618809   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:10.629402   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:10.796584   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:10.953582   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:11.118756   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:11.129274   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:11.297289   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:11.453484   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:11.619041   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:11.719653   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:11.795280   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:11.953112   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:12.118281   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:12.128303   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:12.296198   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:12.453209   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:12.619123   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:12.628566   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:12.795333   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:12.953250   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:13.118314   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:13.129238   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:13.296944   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:13.454310   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:13.618236   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:13.628977   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:13.796598   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:13.953352   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:14.135577   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:14.135591   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:14.296155   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:14.452832   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:14.618812   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:14.720044   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:14.795841   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:14.953446   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:15.118126   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:15.128062   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:15.295985   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:15.454177   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:15.619221   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:15.628377   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:15.796289   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:15.952963   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:16.119444   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:16.129172   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:16.296062   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:16.453157   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:16.619217   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:16.629530   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:16.796049   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:16.953145   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:17.119287   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:17.128566   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:17.295770   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:17.454037   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:17.618860   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:17.628761   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:17.796435   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:17.952557   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:18.118750   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:18.128804   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:18.295731   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:18.453725   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:18.620295   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:18.629972   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:18.797276   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:18.955863   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:19.121294   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:19.129787   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:19.311368   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:19.454126   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:19.693481   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:19.693699   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:19.796110   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:19.954239   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:20.119603   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:20.129138   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:20.296678   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:20.453732   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:20.618915   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:20.629405   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:20.796479   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:20.953272   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:21.119548   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:21.129059   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:21.296511   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:21.453342   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:21.618520   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:21.629514   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:21.795946   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:21.952761   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:22.123723   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:22.129070   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:22.296795   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:22.454127   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:22.619495   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:22.628750   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:22.795591   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:22.953149   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:23.119705   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:23.129439   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:23.296928   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:23.454059   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:23.618973   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:23.630366   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:23.797528   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:23.953119   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:24.119163   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:24.128221   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:24.297079   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:24.453837   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:24.620309   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:24.629256   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:24.796850   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:24.953912   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:25.118714   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:25.128825   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:25.296393   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:25.453735   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:25.619316   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:25.628756   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:25.795711   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:25.953272   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:26.118806   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:26.128832   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:26.295983   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:26.453561   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:26.618613   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:26.628680   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:26.795570   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:26.953354   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:27.118220   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:27.131727   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:27.295676   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:27.453469   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:27.618421   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:27.628787   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:27.795668   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:28.011480   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:28.118242   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:28.129121   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:28.296519   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:28.453278   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:28.619999   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:28.628862   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:28.797151   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:28.952981   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:29.118815   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:29.129802   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:29.296568   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:29.453571   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:29.618502   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:29.629403   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:29.796464   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:29.952881   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:30.119061   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:30.128572   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:30.296201   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:30.474055   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:30.619175   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:30.628420   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:30.796744   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:30.953536   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:31.118216   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:31.127986   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:31.295916   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:31.453471   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:31.617962   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:31.628145   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:31.796361   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:31.953451   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:32.118757   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:32.129314   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:32.296888   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:32.453349   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:32.618723   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:32.628810   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:32.795971   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:32.953676   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:33.118318   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:33.128519   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:33.295612   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:33.452686   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:33.619001   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:33.629200   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:33.796044   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:33.952179   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:34.118945   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:34.127825   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:34.296222   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:34.454299   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:34.618186   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:34.629597   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:34.795694   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:34.953182   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:35.119191   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:35.128684   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:35.295579   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:35.453315   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:35.619350   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:35.628936   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:35.796677   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:35.953338   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:36.117910   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:36.129329   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:36.337737   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:36.475706   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:36.618663   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:36.628813   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:36.796490   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:36.953082   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:37.119422   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:37.129323   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:37.295927   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:37.454706   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:37.618699   69327 kapi.go:107] duration metric: took 1m14.503570302s to wait for kubernetes.io/minikube-addons=registry ...
	I1123 09:24:37.628709   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:37.795851   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:37.953676   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:38.129222   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:38.297055   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:38.454281   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:38.629646   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:38.796550   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:38.953236   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:39.128655   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:39.295468   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:39.453186   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:39.629125   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:39.796560   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:39.953510   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:40.129175   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:40.296727   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:40.453695   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:40.631711   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:40.796960   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:40.953940   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:41.129653   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:41.296242   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:41.454346   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:41.629415   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:41.796208   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:41.953001   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:42.129353   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:42.297225   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:42.452987   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:42.683083   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:42.796491   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:42.955412   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:43.129485   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:43.296405   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:43.453548   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:43.629530   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:43.795744   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:43.952923   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:44.128854   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:44.295919   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:44.453754   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:44.629758   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:44.795702   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:44.952926   69327 kapi.go:107] duration metric: took 1m21.502975936s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1123 09:24:45.129391   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:45.296341   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:45.629718   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:45.796483   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:46.129023   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:46.296857   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:46.628566   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:46.795532   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:47.128798   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:47.295681   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:47.629496   69327 kapi.go:107] duration metric: took 1m17.503683612s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1123 09:24:47.631287   69327 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-768607 cluster.
	I1123 09:24:47.632778   69327 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1123 09:24:47.634151   69327 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1123 09:24:47.795784   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:48.296438   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:48.796199   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:49.296707   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:49.796484   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:50.296563   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:50.796232   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:51.295999   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:51.795811   69327 kapi.go:107] duration metric: took 1m28.003326818s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1123 09:24:51.797487   69327 out.go:179] * Enabled addons: storage-provisioner, cloud-spanner, registry-creds, nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner-rancher, inspektor-gadget, ingress-dns, default-storageclass, yakd, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1123 09:24:51.798563   69327 addons.go:530] duration metric: took 1m29.994638779s for enable addons: enabled=[storage-provisioner cloud-spanner registry-creds nvidia-device-plugin amd-gpu-device-plugin storage-provisioner-rancher inspektor-gadget ingress-dns default-storageclass yakd metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1123 09:24:51.798607   69327 start.go:247] waiting for cluster config update ...
	I1123 09:24:51.798634   69327 start.go:256] writing updated cluster config ...
	I1123 09:24:51.798933   69327 ssh_runner.go:195] Run: rm -f paused
	I1123 09:24:51.802842   69327 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:24:51.805813   69327 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qvd9b" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:51.809680   69327 pod_ready.go:94] pod "coredns-66bc5c9577-qvd9b" is "Ready"
	I1123 09:24:51.809702   69327 pod_ready.go:86] duration metric: took 3.869185ms for pod "coredns-66bc5c9577-qvd9b" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:51.811367   69327 pod_ready.go:83] waiting for pod "etcd-addons-768607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:51.814970   69327 pod_ready.go:94] pod "etcd-addons-768607" is "Ready"
	I1123 09:24:51.814989   69327 pod_ready.go:86] duration metric: took 3.602809ms for pod "etcd-addons-768607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:51.816856   69327 pod_ready.go:83] waiting for pod "kube-apiserver-addons-768607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:51.820245   69327 pod_ready.go:94] pod "kube-apiserver-addons-768607" is "Ready"
	I1123 09:24:51.820275   69327 pod_ready.go:86] duration metric: took 3.397031ms for pod "kube-apiserver-addons-768607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:51.821898   69327 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-768607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:52.206905   69327 pod_ready.go:94] pod "kube-controller-manager-addons-768607" is "Ready"
	I1123 09:24:52.206940   69327 pod_ready.go:86] duration metric: took 385.022394ms for pod "kube-controller-manager-addons-768607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:52.459008   69327 pod_ready.go:83] waiting for pod "kube-proxy-szpms" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:52.820223   69327 pod_ready.go:94] pod "kube-proxy-szpms" is "Ready"
	I1123 09:24:52.820255   69327 pod_ready.go:86] duration metric: took 361.216669ms for pod "kube-proxy-szpms" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:53.007511   69327 pod_ready.go:83] waiting for pod "kube-scheduler-addons-768607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:53.406358   69327 pod_ready.go:94] pod "kube-scheduler-addons-768607" is "Ready"
	I1123 09:24:53.406393   69327 pod_ready.go:86] duration metric: took 398.854316ms for pod "kube-scheduler-addons-768607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:53.406411   69327 pod_ready.go:40] duration metric: took 1.603537207s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:24:53.449150   69327 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:24:53.451401   69327 out.go:179] * Done! kubectl is now configured to use "addons-768607" cluster and "default" namespace by default
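	The gcp-auth addon output above notes that a pod can opt out of credential mounting by carrying the `gcp-auth-skip-secret` label. As an illustrative sketch only (not part of the test run; the label value "true", the pod name, and the image are assumptions, the label key itself comes from the log above), the opt-out could be applied like this:
	
	  # Hypothetical example: create a pod that the gcp-auth webhook should skip
	  # when injecting credentials (label value assumed to be "true").
	  kubectl run skip-demo --image=busybox:1.36 --restart=Never \
	    --labels="gcp-auth-skip-secret=true" -- sleep 3600
	
	Per the log above, pods created before the addon was enabled only pick up credentials after being recreated or after rerunning addons enable with --refresh.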
	
	
	==> CRI-O <==
	Nov 23 09:26:17 addons-768607 crio[772]: time="2025-11-23T09:26:17.196288039Z" level=info msg="Pulling image: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=f0682888-069d-47c0-a404-7f888d368edd name=/runtime.v1.ImageService/PullImage
	Nov 23 09:26:17 addons-768607 crio[772]: time="2025-11-23T09:26:17.198689297Z" level=info msg="Trying to access \"docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\""
	Nov 23 09:26:18 addons-768607 crio[772]: time="2025-11-23T09:26:18.871172605Z" level=info msg="Pulled image: docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=f0682888-069d-47c0-a404-7f888d368edd name=/runtime.v1.ImageService/PullImage
	Nov 23 09:26:18 addons-768607 crio[772]: time="2025-11-23T09:26:18.871762429Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=fe5c9457-958e-4a1d-a199-bd8ec0bbd92c name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:26:18 addons-768607 crio[772]: time="2025-11-23T09:26:18.90449537Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=2b8c0e99-f034-44e2-aa47-0d21e3d5d5bf name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:26:18 addons-768607 crio[772]: time="2025-11-23T09:26:18.908260953Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-pf8cs/registry-creds" id=941ed9db-68d8-4f14-b46c-c7e569ee654f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:26:18 addons-768607 crio[772]: time="2025-11-23T09:26:18.908391388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:26:18 addons-768607 crio[772]: time="2025-11-23T09:26:18.914963117Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:26:18 addons-768607 crio[772]: time="2025-11-23T09:26:18.915586704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:26:18 addons-768607 crio[772]: time="2025-11-23T09:26:18.94441273Z" level=info msg="Created container 45a50d00e5da911efa13be7d29d806b6f2d062f2f7fe9a53283ca2f838d68c74: kube-system/registry-creds-764b6fb674-pf8cs/registry-creds" id=941ed9db-68d8-4f14-b46c-c7e569ee654f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:26:18 addons-768607 crio[772]: time="2025-11-23T09:26:18.944962354Z" level=info msg="Starting container: 45a50d00e5da911efa13be7d29d806b6f2d062f2f7fe9a53283ca2f838d68c74" id=c89ffac1-cfe9-42ee-bf64-b7b29704d5a2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:26:18 addons-768607 crio[772]: time="2025-11-23T09:26:18.946765536Z" level=info msg="Started container" PID=8902 containerID=45a50d00e5da911efa13be7d29d806b6f2d062f2f7fe9a53283ca2f838d68c74 description=kube-system/registry-creds-764b6fb674-pf8cs/registry-creds id=c89ffac1-cfe9-42ee-bf64-b7b29704d5a2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2fbe1f0bb733256b6c9e74dea87005f496f7a9f8ead0572224078025dc0f0c7e
	Nov 23 09:27:35 addons-768607 crio[772]: time="2025-11-23T09:27:35.944831561Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-kn8km/POD" id=e5a2defd-226a-4a55-9a26-a21bf5e1819a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:27:35 addons-768607 crio[772]: time="2025-11-23T09:27:35.94491966Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:27:35 addons-768607 crio[772]: time="2025-11-23T09:27:35.952558556Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-kn8km Namespace:default ID:78a964ccc1787197749af7add75c15b32533e84035d28c2cc6cd9f0ea15cbc62 UID:997d3e07-1402-49ff-bc80-31500c96125b NetNS:/var/run/netns/0934b45a-0ec8-46e8-8692-99bca12bdfbc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00062d7f0}] Aliases:map[]}"
	Nov 23 09:27:35 addons-768607 crio[772]: time="2025-11-23T09:27:35.952593143Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-kn8km to CNI network \"kindnet\" (type=ptp)"
	Nov 23 09:27:35 addons-768607 crio[772]: time="2025-11-23T09:27:35.963337568Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-kn8km Namespace:default ID:78a964ccc1787197749af7add75c15b32533e84035d28c2cc6cd9f0ea15cbc62 UID:997d3e07-1402-49ff-bc80-31500c96125b NetNS:/var/run/netns/0934b45a-0ec8-46e8-8692-99bca12bdfbc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00062d7f0}] Aliases:map[]}"
	Nov 23 09:27:35 addons-768607 crio[772]: time="2025-11-23T09:27:35.963500748Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-kn8km for CNI network kindnet (type=ptp)"
	Nov 23 09:27:35 addons-768607 crio[772]: time="2025-11-23T09:27:35.96465498Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 09:27:35 addons-768607 crio[772]: time="2025-11-23T09:27:35.96585094Z" level=info msg="Ran pod sandbox 78a964ccc1787197749af7add75c15b32533e84035d28c2cc6cd9f0ea15cbc62 with infra container: default/hello-world-app-5d498dc89-kn8km/POD" id=e5a2defd-226a-4a55-9a26-a21bf5e1819a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:27:35 addons-768607 crio[772]: time="2025-11-23T09:27:35.967261594Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e00f9434-6e6c-41aa-a444-5e774e78a81e name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:27:35 addons-768607 crio[772]: time="2025-11-23T09:27:35.9673974Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=e00f9434-6e6c-41aa-a444-5e774e78a81e name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:27:35 addons-768607 crio[772]: time="2025-11-23T09:27:35.96745095Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=e00f9434-6e6c-41aa-a444-5e774e78a81e name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:27:35 addons-768607 crio[772]: time="2025-11-23T09:27:35.968157243Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=2120c2bb-c3c1-494f-ab88-0706134a59ab name=/runtime.v1.ImageService/PullImage
	Nov 23 09:27:35 addons-768607 crio[772]: time="2025-11-23T09:27:35.97304528Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	45a50d00e5da9       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   2fbe1f0bb7332       registry-creds-764b6fb674-pf8cs            kube-system
	b2c180df98f6b       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago        Running             nginx                                    0                   b8899ff329764       nginx                                      default
	55be9b144d3e0       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   c8a3b2e53da2b       busybox                                    default
	25a90399c1823       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago        Running             csi-snapshotter                          0                   5b6c39f3fc294       csi-hostpathplugin-9ksmc                   kube-system
	30475367013dc       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago        Running             csi-provisioner                          0                   5b6c39f3fc294       csi-hostpathplugin-9ksmc                   kube-system
	c692f2c4458f0       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago        Running             liveness-probe                           0                   5b6c39f3fc294       csi-hostpathplugin-9ksmc                   kube-system
	231168bdacbd0       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago        Running             hostpath                                 0                   5b6c39f3fc294       csi-hostpathplugin-9ksmc                   kube-system
	1e22dfb32cfee       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago        Running             gcp-auth                                 0                   c38c00beffd66       gcp-auth-78565c9fb4-2pvgc                  gcp-auth
	c919189c246c8       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             2 minutes ago        Running             controller                               0                   f2cf3bb76ce7e       ingress-nginx-controller-6c8bf45fb-bpzqp   ingress-nginx
	5fb1a2f531ff0       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            2 minutes ago        Running             gadget                                   0                   f11a9f639780d       gadget-hp58l                               gadget
	1364a68c663de       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago        Running             node-driver-registrar                    0                   5b6c39f3fc294       csi-hostpathplugin-9ksmc                   kube-system
	57e021fa16b34       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   ca1eb83b1b2dd       registry-proxy-hvxjj                       kube-system
	3be653d3906b7       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   432d04f46b6c6       snapshot-controller-7d9fbc56b8-rdc2h       kube-system
	ae93f08af7cde       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   a527c1165fb87       amd-gpu-device-plugin-8vlwk                kube-system
	58c6caa5d7a2a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   b141c1fe9e009       snapshot-controller-7d9fbc56b8-4qkfp       kube-system
	021ee69331dd2       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   70c6e19296627       nvidia-device-plugin-daemonset-b9prj       kube-system
	59c5e7c66e383       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   5b6c39f3fc294       csi-hostpathplugin-9ksmc                   kube-system
	f4fec87683212       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   330ecedceef63       csi-hostpath-resizer-0                     kube-system
	6039167b575c1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago        Exited              patch                                    0                   a92d85b711d57       ingress-nginx-admission-patch-6r4gd        ingress-nginx
	530d43c188680       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago        Exited              create                                   0                   f39efe8cafe25       ingress-nginx-admission-create-gxxrb       ingress-nginx
	00cf685e4f763       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   d57b49805add0       kube-ingress-dns-minikube                  kube-system
	6e05171fad5d4       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   0be8d43ad1430       csi-hostpath-attacher-0                    kube-system
	256c13e134ad7       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   fe1fb36c19632       local-path-provisioner-648f6765c9-txfsp    local-path-storage
	d693f68cf264b       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago        Running             yakd                                     0                   6be84db5a7ce8       yakd-dashboard-5ff678cb9-288kh             yakd-dashboard
	034d682ec778c       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago        Running             cloud-spanner-emulator                   0                   92541d65f90ee       cloud-spanner-emulator-5bdddb765-qn9ss     default
	da035bc9e46eb       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   0e616adcb24df       registry-6b586f9694-wb6sr                  kube-system
	c21acab334cad       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   da0edeb18a21b       metrics-server-85b7d694d7-gzdxp            kube-system
	8f3fdc51b52f6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago        Running             coredns                                  0                   cc03772847c09       coredns-66bc5c9577-qvd9b                   kube-system
	01d6b9bf1de88       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago        Running             storage-provisioner                      0                   3a06a365102cd       storage-provisioner                        kube-system
	403102191b13c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago        Running             kube-proxy                               0                   d32bf043aaa4c       kube-proxy-szpms                           kube-system
	d98e916f22715       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago        Running             kindnet-cni                              0                   79d8d5732cfb1       kindnet-tw8jx                              kube-system
	628b56a1e0e47       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago        Running             etcd                                     0                   330aaa6f639e9       etcd-addons-768607                         kube-system
	93dfa5558a7a8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago        Running             kube-controller-manager                  0                   a873d54ad8caa       kube-controller-manager-addons-768607      kube-system
	b5f64ab3094a6       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago        Running             kube-apiserver                           0                   268a5e52d91de       kube-apiserver-addons-768607               kube-system
	d8909c0c21553       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago        Running             kube-scheduler                           0                   3c141f6eb777e       kube-scheduler-addons-768607               kube-system
	
	
	==> coredns [8f3fdc51b52f6779513f36acefb86bcc8943baf18483f08bf8cce60927bd9cd4] <==
	[INFO] 10.244.0.22:50023 - 55261 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000142648s
	[INFO] 10.244.0.22:58156 - 46325 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006493188s
	[INFO] 10.244.0.22:57985 - 17948 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006975937s
	[INFO] 10.244.0.22:40092 - 21269 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004203429s
	[INFO] 10.244.0.22:53637 - 37524 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006615399s
	[INFO] 10.244.0.22:43184 - 55676 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00326154s
	[INFO] 10.244.0.22:58206 - 40599 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005204673s
	[INFO] 10.244.0.22:38273 - 29143 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000792185s
	[INFO] 10.244.0.22:54052 - 61089 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001085307s
	[INFO] 10.244.0.28:54752 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000245421s
	[INFO] 10.244.0.28:57492 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000215461s
	[INFO] 10.244.0.31:42809 - 42751 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.00023878s
	[INFO] 10.244.0.31:51201 - 40 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000317403s
	[INFO] 10.244.0.31:56304 - 23201 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000117597s
	[INFO] 10.244.0.31:44465 - 52258 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000145769s
	[INFO] 10.244.0.31:51753 - 9139 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000111068s
	[INFO] 10.244.0.31:54836 - 38025 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000146104s
	[INFO] 10.244.0.31:49989 - 27579 "A IN accounts.google.com.europe-west4-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.00524379s
	[INFO] 10.244.0.31:56281 - 9637 "AAAA IN accounts.google.com.europe-west4-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.005381345s
	[INFO] 10.244.0.31:52483 - 37136 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005742132s
	[INFO] 10.244.0.31:57377 - 40629 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.006240543s
	[INFO] 10.244.0.31:51000 - 30933 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004628646s
	[INFO] 10.244.0.31:50497 - 57739 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004699842s
	[INFO] 10.244.0.31:45035 - 15946 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.002004402s
	[INFO] 10.244.0.31:49716 - 12927 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.002158169s
	
	
	==> describe nodes <==
	Name:               addons-768607
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-768607
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=addons-768607
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_23_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-768607
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-768607"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:23:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-768607
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:27:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:26:50 +0000   Sun, 23 Nov 2025 09:23:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:26:50 +0000   Sun, 23 Nov 2025 09:23:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:26:50 +0000   Sun, 23 Nov 2025 09:23:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:26:50 +0000   Sun, 23 Nov 2025 09:24:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-768607
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                a0f70bd2-ce4d-4b3b-948d-0689086be8f1
	  Boot ID:                    37682299-5e60-467e-85b2-43c912a4056e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  default                     cloud-spanner-emulator-5bdddb765-qn9ss      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  default                     hello-world-app-5d498dc89-kn8km             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  gadget                      gadget-hp58l                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  gcp-auth                    gcp-auth-78565c9fb4-2pvgc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-bpzqp    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m14s
	  kube-system                 amd-gpu-device-plugin-8vlwk                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 coredns-66bc5c9577-qvd9b                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m15s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 csi-hostpathplugin-9ksmc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 etcd-addons-768607                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m21s
	  kube-system                 kindnet-tw8jx                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m16s
	  kube-system                 kube-apiserver-addons-768607                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-controller-manager-addons-768607       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-proxy-szpms                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-scheduler-addons-768607                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 metrics-server-85b7d694d7-gzdxp             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m14s
	  kube-system                 nvidia-device-plugin-daemonset-b9prj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 registry-6b586f9694-wb6sr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 registry-creds-764b6fb674-pf8cs             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 registry-proxy-hvxjj                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 snapshot-controller-7d9fbc56b8-4qkfp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 snapshot-controller-7d9fbc56b8-rdc2h        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  local-path-storage          local-path-provisioner-648f6765c9-txfsp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-288kh              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m13s                  kube-proxy       
	  Normal  Starting                 4m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m26s (x8 over 4m26s)  kubelet          Node addons-768607 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x8 over 4m26s)  kubelet          Node addons-768607 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x8 over 4m26s)  kubelet          Node addons-768607 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m21s                  kubelet          Node addons-768607 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s                  kubelet          Node addons-768607 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s                  kubelet          Node addons-768607 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m17s                  node-controller  Node addons-768607 event: Registered Node addons-768607 in Controller
	  Normal  NodeReady                3m34s                  kubelet          Node addons-768607 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.078010] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021497] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276866] kauditd_printk_skb: 47 callbacks suppressed
	[Nov23 09:25] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.037608] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.023905] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.023966] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000012] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +2.048049] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +4.031511] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +8.255356] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[ +16.383752] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 09:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	
	
	==> etcd [628b56a1e0e47a0532ea5375471e5d17f64b1bece8bd8004b4ed449cf90764a3] <==
	{"level":"warn","ts":"2025-11-23T09:23:13.270913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.276868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.283267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.289059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.295793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.302280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.308778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.316207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.323241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.329097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.335664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.342397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.348518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.354522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.360509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.367893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.387501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.393490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.399107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.439837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:24.215747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:24.222954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:50.837480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:50.851573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:50.857674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54340","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [1e22dfb32cfee7e5c5ffb22b93c4741fa7b2e19a1b9343ca2f2b65b92d580467] <==
	2025/11/23 09:24:47 GCP Auth Webhook started!
	2025/11/23 09:24:53 Ready to marshal response ...
	2025/11/23 09:24:53 Ready to write response ...
	2025/11/23 09:24:53 Ready to marshal response ...
	2025/11/23 09:24:53 Ready to write response ...
	2025/11/23 09:24:54 Ready to marshal response ...
	2025/11/23 09:24:54 Ready to write response ...
	2025/11/23 09:25:02 Ready to marshal response ...
	2025/11/23 09:25:02 Ready to write response ...
	2025/11/23 09:25:02 Ready to marshal response ...
	2025/11/23 09:25:02 Ready to write response ...
	2025/11/23 09:25:10 Ready to marshal response ...
	2025/11/23 09:25:11 Ready to write response ...
	2025/11/23 09:25:12 Ready to marshal response ...
	2025/11/23 09:25:12 Ready to write response ...
	2025/11/23 09:25:12 Ready to marshal response ...
	2025/11/23 09:25:12 Ready to write response ...
	2025/11/23 09:25:31 Ready to marshal response ...
	2025/11/23 09:25:31 Ready to write response ...
	2025/11/23 09:26:01 Ready to marshal response ...
	2025/11/23 09:26:01 Ready to write response ...
	2025/11/23 09:27:35 Ready to marshal response ...
	2025/11/23 09:27:35 Ready to write response ...
	
	
	==> kernel <==
	 09:27:37 up  2:09,  0 user,  load average: 0.30, 0.94, 1.63
	Linux addons-768607 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d98e916f227153ff84dad39f7895deed814fbbef0272aa14546e6a49f6c7226d] <==
	I1123 09:25:33.158887       1 main.go:301] handling current node
	I1123 09:25:43.158870       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:25:43.158902       1 main.go:301] handling current node
	I1123 09:25:53.158433       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:25:53.158462       1 main.go:301] handling current node
	I1123 09:26:03.158510       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:26:03.158548       1 main.go:301] handling current node
	I1123 09:26:13.158774       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:26:13.158812       1 main.go:301] handling current node
	I1123 09:26:23.159162       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:26:23.159201       1 main.go:301] handling current node
	I1123 09:26:33.159160       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:26:33.159204       1 main.go:301] handling current node
	I1123 09:26:43.158522       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:26:43.158551       1 main.go:301] handling current node
	I1123 09:26:53.159160       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:26:53.159188       1 main.go:301] handling current node
	I1123 09:27:03.159049       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:27:03.159080       1 main.go:301] handling current node
	I1123 09:27:13.166858       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:27:13.166891       1 main.go:301] handling current node
	I1123 09:27:23.158969       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:27:23.159008       1 main.go:301] handling current node
	I1123 09:27:33.163912       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:27:33.163942       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b5f64ab3094a653f0bd8f634e5e2cc5066d0b571ace3c66c888b4190eadc2d99] <==
	E1123 09:24:06.346694       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.243.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.243.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.243.125:443: connect: connection refused" logger="UnhandledError"
	E1123 09:24:06.352565       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.243.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.243.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.243.125:443: connect: connection refused" logger="UnhandledError"
	E1123 09:24:06.374059       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.243.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.243.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.243.125:443: connect: connection refused" logger="UnhandledError"
	W1123 09:24:07.348219       1 handler_proxy.go:99] no RequestInfo found in the context
	W1123 09:24:07.348258       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 09:24:07.348259       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1123 09:24:07.348296       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1123 09:24:07.348340       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1123 09:24:07.349494       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1123 09:24:11.421193       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 09:24:11.421250       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1123 09:24:11.421327       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.243.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.243.125:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1123 09:24:11.430041       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1123 09:25:02.098590       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60996: use of closed network connection
	E1123 09:25:02.258467       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:32786: use of closed network connection
	I1123 09:25:10.837303       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1123 09:25:11.073320       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.5.167"}
	I1123 09:25:41.151811       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1123 09:27:35.709552       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.127.79"}
	
	
	==> kube-controller-manager [93dfa5558a7a808c5c354787ab8eec238559016b46eb3e6825f32eb25403e092] <==
	I1123 09:23:20.814836       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-768607"
	I1123 09:23:20.814887       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:23:20.814885       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 09:23:20.815319       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 09:23:20.815328       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 09:23:20.816155       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:23:20.816608       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:23:20.816699       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 09:23:20.816950       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 09:23:20.816987       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 09:23:20.817036       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 09:23:20.817451       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 09:23:20.817829       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 09:23:20.820359       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:23:20.820401       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:23:20.835946       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1123 09:23:23.170039       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1123 09:23:50.825594       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1123 09:23:50.825725       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1123 09:23:50.825783       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1123 09:23:50.842619       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1123 09:23:50.846227       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1123 09:23:50.926158       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:23:50.947377       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:24:05.818916       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [403102191b13c2eef45478f5af6a1ed72ff7fbdea27c8bebc65ffccf6197a3be] <==
	I1123 09:23:22.943851       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:23:23.083881       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:23:23.184189       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:23:23.184230       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 09:23:23.184336       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:23:23.277700       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:23:23.277776       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:23:23.289023       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:23:23.294685       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:23:23.294714       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:23:23.298124       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:23:23.298130       1 config.go:200] "Starting service config controller"
	I1123 09:23:23.298155       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:23:23.298158       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:23:23.298309       1 config.go:309] "Starting node config controller"
	I1123 09:23:23.298349       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:23:23.298376       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:23:23.298585       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:23:23.298664       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:23:23.399165       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:23:23.399204       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:23:23.400515       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d8909c0c21553cdb1824a36e8e2357948596cd908eaa63008f1925c3a97b4f14] <==
	E1123 09:23:13.839362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:23:13.839369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:23:13.839482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:23:13.839483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:23:13.839778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:23:13.839778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:23:13.839856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:23:13.839860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:23:13.839880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:23:13.839896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:23:13.839958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:23:13.840007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:23:13.840009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:23:13.840146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:23:13.840161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:23:13.840172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:23:13.840202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:23:14.729040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:23:14.729040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 09:23:14.808981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:23:14.816984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:23:14.913853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:23:14.972136       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:23:15.019261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1123 09:23:17.938035       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:26:01 addons-768607 kubelet[1289]: I1123 09:26:01.482048    1289 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-0a1dca51-1f6b-42dc-b472-ba88c1940d85\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^6c97509c-c84e-11f0-8300-f6ea666f778e\") pod \"task-pv-pod-restore\" (UID: \"d5bf257b-db65-42a1-ba67-8eccba9ebe74\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/e76122845c7260d98780b41b96fec043da65b3819b244abf0df91055d660b27d/globalmount\"" pod="default/task-pv-pod-restore"
	Nov 23 09:26:06 addons-768607 kubelet[1289]: E1123 09:26:06.471232    1289 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-pf8cs" podUID="b2b57794-0e2a-4a54-b1c1-086e0cf60915"
	Nov 23 09:26:10 addons-768607 kubelet[1289]: I1123 09:26:10.263980    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=8.006915334 podStartE2EDuration="9.263956163s" podCreationTimestamp="2025-11-23 09:26:01 +0000 UTC" firstStartedPulling="2025-11-23 09:26:01.531042875 +0000 UTC m=+165.438548873" lastFinishedPulling="2025-11-23 09:26:02.78808371 +0000 UTC m=+166.695589702" observedRunningTime="2025-11-23 09:26:03.84687264 +0000 UTC m=+167.754378650" watchObservedRunningTime="2025-11-23 09:26:10.263956163 +0000 UTC m=+174.171462172"
	Nov 23 09:26:10 addons-768607 kubelet[1289]: I1123 09:26:10.551935    1289 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d5bf257b-db65-42a1-ba67-8eccba9ebe74-gcp-creds\") pod \"d5bf257b-db65-42a1-ba67-8eccba9ebe74\" (UID: \"d5bf257b-db65-42a1-ba67-8eccba9ebe74\") "
	Nov 23 09:26:10 addons-768607 kubelet[1289]: I1123 09:26:10.551998    1289 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwpth\" (UniqueName: \"kubernetes.io/projected/d5bf257b-db65-42a1-ba67-8eccba9ebe74-kube-api-access-xwpth\") pod \"d5bf257b-db65-42a1-ba67-8eccba9ebe74\" (UID: \"d5bf257b-db65-42a1-ba67-8eccba9ebe74\") "
	Nov 23 09:26:10 addons-768607 kubelet[1289]: I1123 09:26:10.552034    1289 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5bf257b-db65-42a1-ba67-8eccba9ebe74-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "d5bf257b-db65-42a1-ba67-8eccba9ebe74" (UID: "d5bf257b-db65-42a1-ba67-8eccba9ebe74"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 23 09:26:10 addons-768607 kubelet[1289]: I1123 09:26:10.552125    1289 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^6c97509c-c84e-11f0-8300-f6ea666f778e\") pod \"d5bf257b-db65-42a1-ba67-8eccba9ebe74\" (UID: \"d5bf257b-db65-42a1-ba67-8eccba9ebe74\") "
	Nov 23 09:26:10 addons-768607 kubelet[1289]: I1123 09:26:10.552306    1289 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d5bf257b-db65-42a1-ba67-8eccba9ebe74-gcp-creds\") on node \"addons-768607\" DevicePath \"\""
	Nov 23 09:26:10 addons-768607 kubelet[1289]: I1123 09:26:10.554374    1289 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5bf257b-db65-42a1-ba67-8eccba9ebe74-kube-api-access-xwpth" (OuterVolumeSpecName: "kube-api-access-xwpth") pod "d5bf257b-db65-42a1-ba67-8eccba9ebe74" (UID: "d5bf257b-db65-42a1-ba67-8eccba9ebe74"). InnerVolumeSpecName "kube-api-access-xwpth". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 23 09:26:10 addons-768607 kubelet[1289]: I1123 09:26:10.555163    1289 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^6c97509c-c84e-11f0-8300-f6ea666f778e" (OuterVolumeSpecName: "task-pv-storage") pod "d5bf257b-db65-42a1-ba67-8eccba9ebe74" (UID: "d5bf257b-db65-42a1-ba67-8eccba9ebe74"). InnerVolumeSpecName "pvc-0a1dca51-1f6b-42dc-b472-ba88c1940d85". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 23 09:26:10 addons-768607 kubelet[1289]: I1123 09:26:10.653021    1289 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xwpth\" (UniqueName: \"kubernetes.io/projected/d5bf257b-db65-42a1-ba67-8eccba9ebe74-kube-api-access-xwpth\") on node \"addons-768607\" DevicePath \"\""
	Nov 23 09:26:10 addons-768607 kubelet[1289]: I1123 09:26:10.653076    1289 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-0a1dca51-1f6b-42dc-b472-ba88c1940d85\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^6c97509c-c84e-11f0-8300-f6ea666f778e\") on node \"addons-768607\" "
	Nov 23 09:26:10 addons-768607 kubelet[1289]: I1123 09:26:10.660210    1289 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-0a1dca51-1f6b-42dc-b472-ba88c1940d85" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^6c97509c-c84e-11f0-8300-f6ea666f778e") on node "addons-768607"
	Nov 23 09:26:10 addons-768607 kubelet[1289]: I1123 09:26:10.753840    1289 reconciler_common.go:299] "Volume detached for volume \"pvc-0a1dca51-1f6b-42dc-b472-ba88c1940d85\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^6c97509c-c84e-11f0-8300-f6ea666f778e\") on node \"addons-768607\" DevicePath \"\""
	Nov 23 09:26:10 addons-768607 kubelet[1289]: I1123 09:26:10.863588    1289 scope.go:117] "RemoveContainer" containerID="13a1f58946724a3562841e3a94cc99a1e123f83e218162a30562e8e7daf08230"
	Nov 23 09:26:10 addons-768607 kubelet[1289]: I1123 09:26:10.873793    1289 scope.go:117] "RemoveContainer" containerID="13a1f58946724a3562841e3a94cc99a1e123f83e218162a30562e8e7daf08230"
	Nov 23 09:26:10 addons-768607 kubelet[1289]: E1123 09:26:10.874125    1289 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13a1f58946724a3562841e3a94cc99a1e123f83e218162a30562e8e7daf08230\": container with ID starting with 13a1f58946724a3562841e3a94cc99a1e123f83e218162a30562e8e7daf08230 not found: ID does not exist" containerID="13a1f58946724a3562841e3a94cc99a1e123f83e218162a30562e8e7daf08230"
	Nov 23 09:26:10 addons-768607 kubelet[1289]: I1123 09:26:10.874169    1289 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13a1f58946724a3562841e3a94cc99a1e123f83e218162a30562e8e7daf08230"} err="failed to get container status \"13a1f58946724a3562841e3a94cc99a1e123f83e218162a30562e8e7daf08230\": rpc error: code = NotFound desc = could not find container \"13a1f58946724a3562841e3a94cc99a1e123f83e218162a30562e8e7daf08230\": container with ID starting with 13a1f58946724a3562841e3a94cc99a1e123f83e218162a30562e8e7daf08230 not found: ID does not exist"
	Nov 23 09:26:12 addons-768607 kubelet[1289]: I1123 09:26:12.176043    1289 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5bf257b-db65-42a1-ba67-8eccba9ebe74" path="/var/lib/kubelet/pods/d5bf257b-db65-42a1-ba67-8eccba9ebe74/volumes"
	Nov 23 09:26:19 addons-768607 kubelet[1289]: I1123 09:26:19.912955    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-pf8cs" podStartSLOduration=176.236189559 podStartE2EDuration="2m57.91293183s" podCreationTimestamp="2025-11-23 09:23:22 +0000 UTC" firstStartedPulling="2025-11-23 09:26:17.195957276 +0000 UTC m=+181.103463265" lastFinishedPulling="2025-11-23 09:26:18.872699534 +0000 UTC m=+182.780205536" observedRunningTime="2025-11-23 09:26:19.912650091 +0000 UTC m=+183.820156101" watchObservedRunningTime="2025-11-23 09:26:19.91293183 +0000 UTC m=+183.820437840"
	Nov 23 09:26:56 addons-768607 kubelet[1289]: I1123 09:26:56.174404    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-hvxjj" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 09:27:09 addons-768607 kubelet[1289]: I1123 09:27:09.173484    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-8vlwk" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 09:27:15 addons-768607 kubelet[1289]: I1123 09:27:15.172398    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-b9prj" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 09:27:35 addons-768607 kubelet[1289]: I1123 09:27:35.758855    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/997d3e07-1402-49ff-bc80-31500c96125b-gcp-creds\") pod \"hello-world-app-5d498dc89-kn8km\" (UID: \"997d3e07-1402-49ff-bc80-31500c96125b\") " pod="default/hello-world-app-5d498dc89-kn8km"
	Nov 23 09:27:35 addons-768607 kubelet[1289]: I1123 09:27:35.758908    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsgp8\" (UniqueName: \"kubernetes.io/projected/997d3e07-1402-49ff-bc80-31500c96125b-kube-api-access-rsgp8\") pod \"hello-world-app-5d498dc89-kn8km\" (UID: \"997d3e07-1402-49ff-bc80-31500c96125b\") " pod="default/hello-world-app-5d498dc89-kn8km"
	
	
	==> storage-provisioner [01d6b9bf1de88e27a372ace627c4c029fd51c26dd3f9e477e70137ecab416c36] <==
	W1123 09:27:12.841881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:14.845410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:14.850121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:16.853126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:16.856765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:18.860241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:18.863734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:20.866573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:20.870931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:22.873591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:22.877978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:24.880970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:24.884893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:26.887741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:26.891684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:28.895363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:28.898955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:30.901876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:30.905508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:32.908461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:32.912158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:34.915255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:34.919374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:36.923378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:36.927792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-768607 -n addons-768607
helpers_test.go:269: (dbg) Run:  kubectl --context addons-768607 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-kn8km ingress-nginx-admission-create-gxxrb ingress-nginx-admission-patch-6r4gd
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-768607 describe pod hello-world-app-5d498dc89-kn8km ingress-nginx-admission-create-gxxrb ingress-nginx-admission-patch-6r4gd
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-768607 describe pod hello-world-app-5d498dc89-kn8km ingress-nginx-admission-create-gxxrb ingress-nginx-admission-patch-6r4gd: exit status 1 (64.465934ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-kn8km
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-768607/192.168.49.2
	Start Time:       Sun, 23 Nov 2025 09:27:35 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rsgp8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rsgp8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-kn8km to addons-768607
	  Normal  Pulling    3s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     1s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 1.376s (1.376s including waiting). Image size: 4944818 bytes.
	  Normal  Created    1s    kubelet            Created container: hello-world-app
	  Normal  Started    1s    kubelet            Started container hello-world-app

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-gxxrb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6r4gd" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-768607 describe pod hello-world-app-5d498dc89-kn8km ingress-nginx-admission-create-gxxrb ingress-nginx-admission-patch-6r4gd: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768607 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-768607 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (248.504727ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:27:38.065176   84021 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:27:38.065418   84021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:27:38.065426   84021 out.go:374] Setting ErrFile to fd 2...
	I1123 09:27:38.065430   84021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:27:38.065595   84021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:27:38.065885   84021 mustload.go:66] Loading cluster: addons-768607
	I1123 09:27:38.066265   84021 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:27:38.066283   84021 addons.go:622] checking whether the cluster is paused
	I1123 09:27:38.066369   84021 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:27:38.066382   84021 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:27:38.066750   84021 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:27:38.083343   84021 ssh_runner.go:195] Run: systemctl --version
	I1123 09:27:38.083387   84021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:27:38.100715   84021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:27:38.200240   84021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:27:38.200353   84021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:27:38.233668   84021 cri.go:89] found id: "45a50d00e5da911efa13be7d29d806b6f2d062f2f7fe9a53283ca2f838d68c74"
	I1123 09:27:38.233688   84021 cri.go:89] found id: "25a90399c18236ad6f1bd9852bf514abf0cfdc53c80ac7131707ae0c129914ea"
	I1123 09:27:38.233694   84021 cri.go:89] found id: "30475367013dc133bcf31a113a5c805e13a7ae522a2da8b1822d775743fa921d"
	I1123 09:27:38.233698   84021 cri.go:89] found id: "c692f2c4458f012e6dea37a4c5038473c8cfd52404290a9de36ee5f9dd461c33"
	I1123 09:27:38.233701   84021 cri.go:89] found id: "231168bdacbd0c44ccde524c4583dfde2507563b41dee16504f6b24aef69a685"
	I1123 09:27:38.233705   84021 cri.go:89] found id: "1364a68c663de4ec03d6c3f263b8fa435f60ed6f587a1d8c69c68574c3028a16"
	I1123 09:27:38.233708   84021 cri.go:89] found id: "57e021fa16b348dacd32b5db60bcf18618a4bd2723bead3fec97bd84820ae20d"
	I1123 09:27:38.233711   84021 cri.go:89] found id: "3be653d3906b767500336b15d61cf5636f7a5b7c372f4f4239c08bad906d64bb"
	I1123 09:27:38.233713   84021 cri.go:89] found id: "ae93f08af7cde91292f142bed64324d42e3c1e7deb6434ab827d5ecf8065d37c"
	I1123 09:27:38.233724   84021 cri.go:89] found id: "58c6caa5d7a2a89fda27f06ce40a04b27480a4b2e04bb5411861ff89abe5e146"
	I1123 09:27:38.233728   84021 cri.go:89] found id: "021ee69331dd21bf229ecd1db5d55d798fd0eee37dc0bb0b9a624c5cbccc770f"
	I1123 09:27:38.233731   84021 cri.go:89] found id: "59c5e7c66e3835f4f14bd4a82f661c738488bc6c624cc2fd13eabad0519797c8"
	I1123 09:27:38.233733   84021 cri.go:89] found id: "f4fec8768321222a9f9bf178328a43695dc72a29313975ce785004a208ca5af3"
	I1123 09:27:38.233736   84021 cri.go:89] found id: "00cf685e4f7633fdc7ff68303c67f4f43add29e8d9ccd87d21a6a088f2fdbc68"
	I1123 09:27:38.233739   84021 cri.go:89] found id: "6e05171fad5d43e018ce9c94cfb7891e9984df090d0b8adddc8122e2efd84ff6"
	I1123 09:27:38.233743   84021 cri.go:89] found id: "da035bc9e46eb83341d5eb40ca2fa703f3cea336a6a912ef4a80eeaf0a0ac076"
	I1123 09:27:38.233746   84021 cri.go:89] found id: "c21acab334cad461bd90789dbd1cf7e4a162446d76ac18a241cd3b8f9863be14"
	I1123 09:27:38.233750   84021 cri.go:89] found id: "8f3fdc51b52f6779513f36acefb86bcc8943baf18483f08bf8cce60927bd9cd4"
	I1123 09:27:38.233753   84021 cri.go:89] found id: "01d6b9bf1de88e27a372ace627c4c029fd51c26dd3f9e477e70137ecab416c36"
	I1123 09:27:38.233755   84021 cri.go:89] found id: "403102191b13c2eef45478f5af6a1ed72ff7fbdea27c8bebc65ffccf6197a3be"
	I1123 09:27:38.233758   84021 cri.go:89] found id: "d98e916f227153ff84dad39f7895deed814fbbef0272aa14546e6a49f6c7226d"
	I1123 09:27:38.233761   84021 cri.go:89] found id: "628b56a1e0e47a0532ea5375471e5d17f64b1bece8bd8004b4ed449cf90764a3"
	I1123 09:27:38.233764   84021 cri.go:89] found id: "93dfa5558a7a808c5c354787ab8eec238559016b46eb3e6825f32eb25403e092"
	I1123 09:27:38.233767   84021 cri.go:89] found id: "b5f64ab3094a653f0bd8f634e5e2cc5066d0b571ace3c66c888b4190eadc2d99"
	I1123 09:27:38.233770   84021 cri.go:89] found id: "d8909c0c21553cdb1824a36e8e2357948596cd908eaa63008f1925c3a97b4f14"
	I1123 09:27:38.233773   84021 cri.go:89] found id: ""
	I1123 09:27:38.233809   84021 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:27:38.247823   84021 out.go:203] 
	W1123 09:27:38.248964   84021 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:27:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:27:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:27:38.248992   84021 out.go:285] * 
	* 
	W1123 09:27:38.253334   84021 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:27:38.254547   84021 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-768607 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768607 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-768607 addons disable ingress --alsologtostderr -v=1: exit status 11 (242.122708ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:27:38.314513   84087 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:27:38.314741   84087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:27:38.314749   84087 out.go:374] Setting ErrFile to fd 2...
	I1123 09:27:38.314753   84087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:27:38.314960   84087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:27:38.315230   84087 mustload.go:66] Loading cluster: addons-768607
	I1123 09:27:38.315568   84087 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:27:38.315583   84087 addons.go:622] checking whether the cluster is paused
	I1123 09:27:38.315659   84087 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:27:38.315672   84087 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:27:38.316029   84087 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:27:38.333442   84087 ssh_runner.go:195] Run: systemctl --version
	I1123 09:27:38.333495   84087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:27:38.350306   84087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:27:38.448740   84087 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:27:38.448804   84087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:27:38.477549   84087 cri.go:89] found id: "45a50d00e5da911efa13be7d29d806b6f2d062f2f7fe9a53283ca2f838d68c74"
	I1123 09:27:38.477580   84087 cri.go:89] found id: "25a90399c18236ad6f1bd9852bf514abf0cfdc53c80ac7131707ae0c129914ea"
	I1123 09:27:38.477585   84087 cri.go:89] found id: "30475367013dc133bcf31a113a5c805e13a7ae522a2da8b1822d775743fa921d"
	I1123 09:27:38.477588   84087 cri.go:89] found id: "c692f2c4458f012e6dea37a4c5038473c8cfd52404290a9de36ee5f9dd461c33"
	I1123 09:27:38.477591   84087 cri.go:89] found id: "231168bdacbd0c44ccde524c4583dfde2507563b41dee16504f6b24aef69a685"
	I1123 09:27:38.477596   84087 cri.go:89] found id: "1364a68c663de4ec03d6c3f263b8fa435f60ed6f587a1d8c69c68574c3028a16"
	I1123 09:27:38.477599   84087 cri.go:89] found id: "57e021fa16b348dacd32b5db60bcf18618a4bd2723bead3fec97bd84820ae20d"
	I1123 09:27:38.477601   84087 cri.go:89] found id: "3be653d3906b767500336b15d61cf5636f7a5b7c372f4f4239c08bad906d64bb"
	I1123 09:27:38.477605   84087 cri.go:89] found id: "ae93f08af7cde91292f142bed64324d42e3c1e7deb6434ab827d5ecf8065d37c"
	I1123 09:27:38.477622   84087 cri.go:89] found id: "58c6caa5d7a2a89fda27f06ce40a04b27480a4b2e04bb5411861ff89abe5e146"
	I1123 09:27:38.477634   84087 cri.go:89] found id: "021ee69331dd21bf229ecd1db5d55d798fd0eee37dc0bb0b9a624c5cbccc770f"
	I1123 09:27:38.477638   84087 cri.go:89] found id: "59c5e7c66e3835f4f14bd4a82f661c738488bc6c624cc2fd13eabad0519797c8"
	I1123 09:27:38.477646   84087 cri.go:89] found id: "f4fec8768321222a9f9bf178328a43695dc72a29313975ce785004a208ca5af3"
	I1123 09:27:38.477650   84087 cri.go:89] found id: "00cf685e4f7633fdc7ff68303c67f4f43add29e8d9ccd87d21a6a088f2fdbc68"
	I1123 09:27:38.477653   84087 cri.go:89] found id: "6e05171fad5d43e018ce9c94cfb7891e9984df090d0b8adddc8122e2efd84ff6"
	I1123 09:27:38.477661   84087 cri.go:89] found id: "da035bc9e46eb83341d5eb40ca2fa703f3cea336a6a912ef4a80eeaf0a0ac076"
	I1123 09:27:38.477666   84087 cri.go:89] found id: "c21acab334cad461bd90789dbd1cf7e4a162446d76ac18a241cd3b8f9863be14"
	I1123 09:27:38.477670   84087 cri.go:89] found id: "8f3fdc51b52f6779513f36acefb86bcc8943baf18483f08bf8cce60927bd9cd4"
	I1123 09:27:38.477673   84087 cri.go:89] found id: "01d6b9bf1de88e27a372ace627c4c029fd51c26dd3f9e477e70137ecab416c36"
	I1123 09:27:38.477676   84087 cri.go:89] found id: "403102191b13c2eef45478f5af6a1ed72ff7fbdea27c8bebc65ffccf6197a3be"
	I1123 09:27:38.477678   84087 cri.go:89] found id: "d98e916f227153ff84dad39f7895deed814fbbef0272aa14546e6a49f6c7226d"
	I1123 09:27:38.477681   84087 cri.go:89] found id: "628b56a1e0e47a0532ea5375471e5d17f64b1bece8bd8004b4ed449cf90764a3"
	I1123 09:27:38.477684   84087 cri.go:89] found id: "93dfa5558a7a808c5c354787ab8eec238559016b46eb3e6825f32eb25403e092"
	I1123 09:27:38.477686   84087 cri.go:89] found id: "b5f64ab3094a653f0bd8f634e5e2cc5066d0b571ace3c66c888b4190eadc2d99"
	I1123 09:27:38.477689   84087 cri.go:89] found id: "d8909c0c21553cdb1824a36e8e2357948596cd908eaa63008f1925c3a97b4f14"
	I1123 09:27:38.477691   84087 cri.go:89] found id: ""
	I1123 09:27:38.477741   84087 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:27:38.491371   84087 out.go:203] 
	W1123 09:27:38.492494   84087 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:27:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:27:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:27:38.492511   84087 out.go:285] * 
	* 
	W1123 09:27:38.496451   84087 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:27:38.497588   84087 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-768607 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (147.91s)
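Note: the exit above is not specific to the ingress addon. Every MK_ADDON_DISABLE_PAUSED / MK_ADDON_ENABLE_PAUSED failure in this report shows the same sequence: the crictl listing of kube-system containers succeeds, then the follow-up paused-state check `sudo runc list -f json` fails with "open /run/runc: no such file or directory". A minimal sketch for reproducing the two commands by hand on the node, assuming the addons-768607 profile is still running (both commands are copied from the log above; the outer ssh quoting is illustrative):

	out/minikube-linux-amd64 -p addons-768607 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"   # succeeds: lists kube-system container IDs as in the log
	out/minikube-linux-amd64 -p addons-768607 ssh "sudo runc list -f json"                                                       # reproduces: open /run/runc: no such file or directory

The missing /run/runc directory suggests the paused-state check is querying a runc state root that this crio node does not use; that reading is an inference from the error message, not something the log itself confirms.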

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (5.28s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-hp58l" [fa682faf-128a-4839-995c-b05669b2600e] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003420403s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768607 addons disable inspektor-gadget --alsologtostderr -v=1
2025/11/23 09:25:17 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-768607 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (278.481582ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:25:17.781591   80171 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:25:17.781883   80171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:17.781896   80171 out.go:374] Setting ErrFile to fd 2...
	I1123 09:25:17.781903   80171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:17.782152   80171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:25:17.782439   80171 mustload.go:66] Loading cluster: addons-768607
	I1123 09:25:17.782780   80171 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:17.782804   80171 addons.go:622] checking whether the cluster is paused
	I1123 09:25:17.782903   80171 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:17.782919   80171 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:25:17.783374   80171 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:25:17.803068   80171 ssh_runner.go:195] Run: systemctl --version
	I1123 09:25:17.803166   80171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:25:17.825833   80171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:25:17.930749   80171 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:25:17.930812   80171 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:25:17.965153   80171 cri.go:89] found id: "25a90399c18236ad6f1bd9852bf514abf0cfdc53c80ac7131707ae0c129914ea"
	I1123 09:25:17.965186   80171 cri.go:89] found id: "30475367013dc133bcf31a113a5c805e13a7ae522a2da8b1822d775743fa921d"
	I1123 09:25:17.965190   80171 cri.go:89] found id: "c692f2c4458f012e6dea37a4c5038473c8cfd52404290a9de36ee5f9dd461c33"
	I1123 09:25:17.965194   80171 cri.go:89] found id: "231168bdacbd0c44ccde524c4583dfde2507563b41dee16504f6b24aef69a685"
	I1123 09:25:17.965197   80171 cri.go:89] found id: "1364a68c663de4ec03d6c3f263b8fa435f60ed6f587a1d8c69c68574c3028a16"
	I1123 09:25:17.965200   80171 cri.go:89] found id: "57e021fa16b348dacd32b5db60bcf18618a4bd2723bead3fec97bd84820ae20d"
	I1123 09:25:17.965203   80171 cri.go:89] found id: "3be653d3906b767500336b15d61cf5636f7a5b7c372f4f4239c08bad906d64bb"
	I1123 09:25:17.965206   80171 cri.go:89] found id: "ae93f08af7cde91292f142bed64324d42e3c1e7deb6434ab827d5ecf8065d37c"
	I1123 09:25:17.965209   80171 cri.go:89] found id: "58c6caa5d7a2a89fda27f06ce40a04b27480a4b2e04bb5411861ff89abe5e146"
	I1123 09:25:17.965220   80171 cri.go:89] found id: "021ee69331dd21bf229ecd1db5d55d798fd0eee37dc0bb0b9a624c5cbccc770f"
	I1123 09:25:17.965224   80171 cri.go:89] found id: "59c5e7c66e3835f4f14bd4a82f661c738488bc6c624cc2fd13eabad0519797c8"
	I1123 09:25:17.965226   80171 cri.go:89] found id: "f4fec8768321222a9f9bf178328a43695dc72a29313975ce785004a208ca5af3"
	I1123 09:25:17.965229   80171 cri.go:89] found id: "00cf685e4f7633fdc7ff68303c67f4f43add29e8d9ccd87d21a6a088f2fdbc68"
	I1123 09:25:17.965232   80171 cri.go:89] found id: "6e05171fad5d43e018ce9c94cfb7891e9984df090d0b8adddc8122e2efd84ff6"
	I1123 09:25:17.965235   80171 cri.go:89] found id: "da035bc9e46eb83341d5eb40ca2fa703f3cea336a6a912ef4a80eeaf0a0ac076"
	I1123 09:25:17.965247   80171 cri.go:89] found id: "c21acab334cad461bd90789dbd1cf7e4a162446d76ac18a241cd3b8f9863be14"
	I1123 09:25:17.965255   80171 cri.go:89] found id: "8f3fdc51b52f6779513f36acefb86bcc8943baf18483f08bf8cce60927bd9cd4"
	I1123 09:25:17.965259   80171 cri.go:89] found id: "01d6b9bf1de88e27a372ace627c4c029fd51c26dd3f9e477e70137ecab416c36"
	I1123 09:25:17.965262   80171 cri.go:89] found id: "403102191b13c2eef45478f5af6a1ed72ff7fbdea27c8bebc65ffccf6197a3be"
	I1123 09:25:17.965265   80171 cri.go:89] found id: "d98e916f227153ff84dad39f7895deed814fbbef0272aa14546e6a49f6c7226d"
	I1123 09:25:17.965269   80171 cri.go:89] found id: "628b56a1e0e47a0532ea5375471e5d17f64b1bece8bd8004b4ed449cf90764a3"
	I1123 09:25:17.965273   80171 cri.go:89] found id: "93dfa5558a7a808c5c354787ab8eec238559016b46eb3e6825f32eb25403e092"
	I1123 09:25:17.965276   80171 cri.go:89] found id: "b5f64ab3094a653f0bd8f634e5e2cc5066d0b571ace3c66c888b4190eadc2d99"
	I1123 09:25:17.965279   80171 cri.go:89] found id: "d8909c0c21553cdb1824a36e8e2357948596cd908eaa63008f1925c3a97b4f14"
	I1123 09:25:17.965281   80171 cri.go:89] found id: ""
	I1123 09:25:17.965366   80171 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:25:17.984972   80171 out.go:203] 
	W1123 09:25:17.989270   80171 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:25:17.989292   80171 out.go:285] * 
	* 
	W1123 09:25:17.993620   80171 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:25:17.995131   80171 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-768607 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.28s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.32s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.553204ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-gzdxp" [780791bf-6d1f-4a14-a71c-0f02d8863b50] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004153271s
addons_test.go:463: (dbg) Run:  kubectl --context addons-768607 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768607 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-768607 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (259.156737ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:25:10.386038   78814 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:25:10.386355   78814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:10.386366   78814 out.go:374] Setting ErrFile to fd 2...
	I1123 09:25:10.386371   78814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:10.386592   78814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:25:10.386865   78814 mustload.go:66] Loading cluster: addons-768607
	I1123 09:25:10.387210   78814 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:10.387226   78814 addons.go:622] checking whether the cluster is paused
	I1123 09:25:10.387312   78814 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:10.387324   78814 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:25:10.387707   78814 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:25:10.406145   78814 ssh_runner.go:195] Run: systemctl --version
	I1123 09:25:10.406206   78814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:25:10.424578   78814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:25:10.527141   78814 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:25:10.527212   78814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:25:10.560414   78814 cri.go:89] found id: "25a90399c18236ad6f1bd9852bf514abf0cfdc53c80ac7131707ae0c129914ea"
	I1123 09:25:10.560442   78814 cri.go:89] found id: "30475367013dc133bcf31a113a5c805e13a7ae522a2da8b1822d775743fa921d"
	I1123 09:25:10.560449   78814 cri.go:89] found id: "c692f2c4458f012e6dea37a4c5038473c8cfd52404290a9de36ee5f9dd461c33"
	I1123 09:25:10.560454   78814 cri.go:89] found id: "231168bdacbd0c44ccde524c4583dfde2507563b41dee16504f6b24aef69a685"
	I1123 09:25:10.560458   78814 cri.go:89] found id: "1364a68c663de4ec03d6c3f263b8fa435f60ed6f587a1d8c69c68574c3028a16"
	I1123 09:25:10.560463   78814 cri.go:89] found id: "57e021fa16b348dacd32b5db60bcf18618a4bd2723bead3fec97bd84820ae20d"
	I1123 09:25:10.560469   78814 cri.go:89] found id: "3be653d3906b767500336b15d61cf5636f7a5b7c372f4f4239c08bad906d64bb"
	I1123 09:25:10.560473   78814 cri.go:89] found id: "ae93f08af7cde91292f142bed64324d42e3c1e7deb6434ab827d5ecf8065d37c"
	I1123 09:25:10.560477   78814 cri.go:89] found id: "58c6caa5d7a2a89fda27f06ce40a04b27480a4b2e04bb5411861ff89abe5e146"
	I1123 09:25:10.560486   78814 cri.go:89] found id: "021ee69331dd21bf229ecd1db5d55d798fd0eee37dc0bb0b9a624c5cbccc770f"
	I1123 09:25:10.560491   78814 cri.go:89] found id: "59c5e7c66e3835f4f14bd4a82f661c738488bc6c624cc2fd13eabad0519797c8"
	I1123 09:25:10.560496   78814 cri.go:89] found id: "f4fec8768321222a9f9bf178328a43695dc72a29313975ce785004a208ca5af3"
	I1123 09:25:10.560500   78814 cri.go:89] found id: "00cf685e4f7633fdc7ff68303c67f4f43add29e8d9ccd87d21a6a088f2fdbc68"
	I1123 09:25:10.560505   78814 cri.go:89] found id: "6e05171fad5d43e018ce9c94cfb7891e9984df090d0b8adddc8122e2efd84ff6"
	I1123 09:25:10.560510   78814 cri.go:89] found id: "da035bc9e46eb83341d5eb40ca2fa703f3cea336a6a912ef4a80eeaf0a0ac076"
	I1123 09:25:10.560535   78814 cri.go:89] found id: "c21acab334cad461bd90789dbd1cf7e4a162446d76ac18a241cd3b8f9863be14"
	I1123 09:25:10.560541   78814 cri.go:89] found id: "8f3fdc51b52f6779513f36acefb86bcc8943baf18483f08bf8cce60927bd9cd4"
	I1123 09:25:10.560546   78814 cri.go:89] found id: "01d6b9bf1de88e27a372ace627c4c029fd51c26dd3f9e477e70137ecab416c36"
	I1123 09:25:10.560550   78814 cri.go:89] found id: "403102191b13c2eef45478f5af6a1ed72ff7fbdea27c8bebc65ffccf6197a3be"
	I1123 09:25:10.560554   78814 cri.go:89] found id: "d98e916f227153ff84dad39f7895deed814fbbef0272aa14546e6a49f6c7226d"
	I1123 09:25:10.560558   78814 cri.go:89] found id: "628b56a1e0e47a0532ea5375471e5d17f64b1bece8bd8004b4ed449cf90764a3"
	I1123 09:25:10.560562   78814 cri.go:89] found id: "93dfa5558a7a808c5c354787ab8eec238559016b46eb3e6825f32eb25403e092"
	I1123 09:25:10.560566   78814 cri.go:89] found id: "b5f64ab3094a653f0bd8f634e5e2cc5066d0b571ace3c66c888b4190eadc2d99"
	I1123 09:25:10.560572   78814 cri.go:89] found id: "d8909c0c21553cdb1824a36e8e2357948596cd908eaa63008f1925c3a97b4f14"
	I1123 09:25:10.560576   78814 cri.go:89] found id: ""
	I1123 09:25:10.560634   78814 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:25:10.575566   78814 out.go:203] 
	W1123 09:25:10.576739   78814 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:25:10.576759   78814 out.go:285] * 
	* 
	W1123 09:25:10.580846   78814 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:25:10.582193   78814 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-768607 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.32s)

                                                
                                    
x
+
TestAddons/parallel/CSI (63.9s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1123 09:25:07.792588   67870 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1123 09:25:07.796199   67870 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1123 09:25:07.796239   67870 kapi.go:107] duration metric: took 3.668982ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.690148ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-768607 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-768607 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [08f53b7a-89ec-437a-980c-24bcaa783bc7] Pending
helpers_test.go:352: "task-pv-pod" [08f53b7a-89ec-437a-980c-24bcaa783bc7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [08f53b7a-89ec-437a-980c-24bcaa783bc7] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004180238s
addons_test.go:572: (dbg) Run:  kubectl --context addons-768607 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-768607 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-768607 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-768607 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-768607 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-768607 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-768607 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [d5bf257b-db65-42a1-ba67-8eccba9ebe74] Pending
helpers_test.go:352: "task-pv-pod-restore" [d5bf257b-db65-42a1-ba67-8eccba9ebe74] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [d5bf257b-db65-42a1-ba67-8eccba9ebe74] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003428468s
addons_test.go:614: (dbg) Run:  kubectl --context addons-768607 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-768607 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-768607 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768607 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-768607 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (245.898173ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:26:11.258865   81942 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:26:11.259117   81942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:26:11.259126   81942 out.go:374] Setting ErrFile to fd 2...
	I1123 09:26:11.259130   81942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:26:11.259325   81942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:26:11.259576   81942 mustload.go:66] Loading cluster: addons-768607
	I1123 09:26:11.259976   81942 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:26:11.259993   81942 addons.go:622] checking whether the cluster is paused
	I1123 09:26:11.260081   81942 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:26:11.260114   81942 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:26:11.260485   81942 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:26:11.278025   81942 ssh_runner.go:195] Run: systemctl --version
	I1123 09:26:11.278079   81942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:26:11.295023   81942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:26:11.394737   81942 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:26:11.394805   81942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:26:11.423690   81942 cri.go:89] found id: "25a90399c18236ad6f1bd9852bf514abf0cfdc53c80ac7131707ae0c129914ea"
	I1123 09:26:11.423720   81942 cri.go:89] found id: "30475367013dc133bcf31a113a5c805e13a7ae522a2da8b1822d775743fa921d"
	I1123 09:26:11.423727   81942 cri.go:89] found id: "c692f2c4458f012e6dea37a4c5038473c8cfd52404290a9de36ee5f9dd461c33"
	I1123 09:26:11.423732   81942 cri.go:89] found id: "231168bdacbd0c44ccde524c4583dfde2507563b41dee16504f6b24aef69a685"
	I1123 09:26:11.423737   81942 cri.go:89] found id: "1364a68c663de4ec03d6c3f263b8fa435f60ed6f587a1d8c69c68574c3028a16"
	I1123 09:26:11.423743   81942 cri.go:89] found id: "57e021fa16b348dacd32b5db60bcf18618a4bd2723bead3fec97bd84820ae20d"
	I1123 09:26:11.423748   81942 cri.go:89] found id: "3be653d3906b767500336b15d61cf5636f7a5b7c372f4f4239c08bad906d64bb"
	I1123 09:26:11.423753   81942 cri.go:89] found id: "ae93f08af7cde91292f142bed64324d42e3c1e7deb6434ab827d5ecf8065d37c"
	I1123 09:26:11.423758   81942 cri.go:89] found id: "58c6caa5d7a2a89fda27f06ce40a04b27480a4b2e04bb5411861ff89abe5e146"
	I1123 09:26:11.423765   81942 cri.go:89] found id: "021ee69331dd21bf229ecd1db5d55d798fd0eee37dc0bb0b9a624c5cbccc770f"
	I1123 09:26:11.423770   81942 cri.go:89] found id: "59c5e7c66e3835f4f14bd4a82f661c738488bc6c624cc2fd13eabad0519797c8"
	I1123 09:26:11.423775   81942 cri.go:89] found id: "f4fec8768321222a9f9bf178328a43695dc72a29313975ce785004a208ca5af3"
	I1123 09:26:11.423780   81942 cri.go:89] found id: "00cf685e4f7633fdc7ff68303c67f4f43add29e8d9ccd87d21a6a088f2fdbc68"
	I1123 09:26:11.423785   81942 cri.go:89] found id: "6e05171fad5d43e018ce9c94cfb7891e9984df090d0b8adddc8122e2efd84ff6"
	I1123 09:26:11.423790   81942 cri.go:89] found id: "da035bc9e46eb83341d5eb40ca2fa703f3cea336a6a912ef4a80eeaf0a0ac076"
	I1123 09:26:11.423797   81942 cri.go:89] found id: "c21acab334cad461bd90789dbd1cf7e4a162446d76ac18a241cd3b8f9863be14"
	I1123 09:26:11.423805   81942 cri.go:89] found id: "8f3fdc51b52f6779513f36acefb86bcc8943baf18483f08bf8cce60927bd9cd4"
	I1123 09:26:11.423812   81942 cri.go:89] found id: "01d6b9bf1de88e27a372ace627c4c029fd51c26dd3f9e477e70137ecab416c36"
	I1123 09:26:11.423822   81942 cri.go:89] found id: "403102191b13c2eef45478f5af6a1ed72ff7fbdea27c8bebc65ffccf6197a3be"
	I1123 09:26:11.423827   81942 cri.go:89] found id: "d98e916f227153ff84dad39f7895deed814fbbef0272aa14546e6a49f6c7226d"
	I1123 09:26:11.423830   81942 cri.go:89] found id: "628b56a1e0e47a0532ea5375471e5d17f64b1bece8bd8004b4ed449cf90764a3"
	I1123 09:26:11.423833   81942 cri.go:89] found id: "93dfa5558a7a808c5c354787ab8eec238559016b46eb3e6825f32eb25403e092"
	I1123 09:26:11.423835   81942 cri.go:89] found id: "b5f64ab3094a653f0bd8f634e5e2cc5066d0b571ace3c66c888b4190eadc2d99"
	I1123 09:26:11.423838   81942 cri.go:89] found id: "d8909c0c21553cdb1824a36e8e2357948596cd908eaa63008f1925c3a97b4f14"
	I1123 09:26:11.423841   81942 cri.go:89] found id: ""
	I1123 09:26:11.423880   81942 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:26:11.438465   81942 out.go:203] 
	W1123 09:26:11.439858   81942 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:26:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:26:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:26:11.439876   81942 out.go:285] * 
	* 
	W1123 09:26:11.443917   81942 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:26:11.445128   81942 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-768607 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768607 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-768607 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (245.056621ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:26:11.505636   82004 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:26:11.505922   82004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:26:11.505933   82004 out.go:374] Setting ErrFile to fd 2...
	I1123 09:26:11.505948   82004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:26:11.506159   82004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:26:11.506423   82004 mustload.go:66] Loading cluster: addons-768607
	I1123 09:26:11.506731   82004 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:26:11.506745   82004 addons.go:622] checking whether the cluster is paused
	I1123 09:26:11.506828   82004 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:26:11.506839   82004 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:26:11.507286   82004 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:26:11.524496   82004 ssh_runner.go:195] Run: systemctl --version
	I1123 09:26:11.524541   82004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:26:11.541158   82004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:26:11.641233   82004 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:26:11.641330   82004 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:26:11.670209   82004 cri.go:89] found id: "25a90399c18236ad6f1bd9852bf514abf0cfdc53c80ac7131707ae0c129914ea"
	I1123 09:26:11.670266   82004 cri.go:89] found id: "30475367013dc133bcf31a113a5c805e13a7ae522a2da8b1822d775743fa921d"
	I1123 09:26:11.670273   82004 cri.go:89] found id: "c692f2c4458f012e6dea37a4c5038473c8cfd52404290a9de36ee5f9dd461c33"
	I1123 09:26:11.670278   82004 cri.go:89] found id: "231168bdacbd0c44ccde524c4583dfde2507563b41dee16504f6b24aef69a685"
	I1123 09:26:11.670283   82004 cri.go:89] found id: "1364a68c663de4ec03d6c3f263b8fa435f60ed6f587a1d8c69c68574c3028a16"
	I1123 09:26:11.670288   82004 cri.go:89] found id: "57e021fa16b348dacd32b5db60bcf18618a4bd2723bead3fec97bd84820ae20d"
	I1123 09:26:11.670292   82004 cri.go:89] found id: "3be653d3906b767500336b15d61cf5636f7a5b7c372f4f4239c08bad906d64bb"
	I1123 09:26:11.670297   82004 cri.go:89] found id: "ae93f08af7cde91292f142bed64324d42e3c1e7deb6434ab827d5ecf8065d37c"
	I1123 09:26:11.670302   82004 cri.go:89] found id: "58c6caa5d7a2a89fda27f06ce40a04b27480a4b2e04bb5411861ff89abe5e146"
	I1123 09:26:11.670316   82004 cri.go:89] found id: "021ee69331dd21bf229ecd1db5d55d798fd0eee37dc0bb0b9a624c5cbccc770f"
	I1123 09:26:11.670325   82004 cri.go:89] found id: "59c5e7c66e3835f4f14bd4a82f661c738488bc6c624cc2fd13eabad0519797c8"
	I1123 09:26:11.670329   82004 cri.go:89] found id: "f4fec8768321222a9f9bf178328a43695dc72a29313975ce785004a208ca5af3"
	I1123 09:26:11.670334   82004 cri.go:89] found id: "00cf685e4f7633fdc7ff68303c67f4f43add29e8d9ccd87d21a6a088f2fdbc68"
	I1123 09:26:11.670339   82004 cri.go:89] found id: "6e05171fad5d43e018ce9c94cfb7891e9984df090d0b8adddc8122e2efd84ff6"
	I1123 09:26:11.670344   82004 cri.go:89] found id: "da035bc9e46eb83341d5eb40ca2fa703f3cea336a6a912ef4a80eeaf0a0ac076"
	I1123 09:26:11.670357   82004 cri.go:89] found id: "c21acab334cad461bd90789dbd1cf7e4a162446d76ac18a241cd3b8f9863be14"
	I1123 09:26:11.670364   82004 cri.go:89] found id: "8f3fdc51b52f6779513f36acefb86bcc8943baf18483f08bf8cce60927bd9cd4"
	I1123 09:26:11.670369   82004 cri.go:89] found id: "01d6b9bf1de88e27a372ace627c4c029fd51c26dd3f9e477e70137ecab416c36"
	I1123 09:26:11.670373   82004 cri.go:89] found id: "403102191b13c2eef45478f5af6a1ed72ff7fbdea27c8bebc65ffccf6197a3be"
	I1123 09:26:11.670377   82004 cri.go:89] found id: "d98e916f227153ff84dad39f7895deed814fbbef0272aa14546e6a49f6c7226d"
	I1123 09:26:11.670382   82004 cri.go:89] found id: "628b56a1e0e47a0532ea5375471e5d17f64b1bece8bd8004b4ed449cf90764a3"
	I1123 09:26:11.670390   82004 cri.go:89] found id: "93dfa5558a7a808c5c354787ab8eec238559016b46eb3e6825f32eb25403e092"
	I1123 09:26:11.670395   82004 cri.go:89] found id: "b5f64ab3094a653f0bd8f634e5e2cc5066d0b571ace3c66c888b4190eadc2d99"
	I1123 09:26:11.670403   82004 cri.go:89] found id: "d8909c0c21553cdb1824a36e8e2357948596cd908eaa63008f1925c3a97b4f14"
	I1123 09:26:11.670408   82004 cri.go:89] found id: ""
	I1123 09:26:11.670467   82004 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:26:11.684285   82004 out.go:203] 
	W1123 09:26:11.685330   82004 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:26:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:26:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:26:11.685354   82004 out.go:285] * 
	* 
	W1123 09:26:11.689239   82004 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:26:11.690593   82004 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-768607 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (63.90s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (2.74s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-768607 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-768607 --alsologtostderr -v=1: exit status 11 (286.685006ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:25:02.591559   77687 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:25:02.591876   77687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:02.591886   77687 out.go:374] Setting ErrFile to fd 2...
	I1123 09:25:02.591890   77687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:02.592098   77687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:25:02.592396   77687 mustload.go:66] Loading cluster: addons-768607
	I1123 09:25:02.592718   77687 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:02.592734   77687 addons.go:622] checking whether the cluster is paused
	I1123 09:25:02.592814   77687 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:02.592826   77687 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:25:02.593291   77687 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:25:02.613194   77687 ssh_runner.go:195] Run: systemctl --version
	I1123 09:25:02.613253   77687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:25:02.633018   77687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:25:02.739422   77687 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:25:02.739536   77687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:25:02.773282   77687 cri.go:89] found id: "25a90399c18236ad6f1bd9852bf514abf0cfdc53c80ac7131707ae0c129914ea"
	I1123 09:25:02.773309   77687 cri.go:89] found id: "30475367013dc133bcf31a113a5c805e13a7ae522a2da8b1822d775743fa921d"
	I1123 09:25:02.773316   77687 cri.go:89] found id: "c692f2c4458f012e6dea37a4c5038473c8cfd52404290a9de36ee5f9dd461c33"
	I1123 09:25:02.773321   77687 cri.go:89] found id: "231168bdacbd0c44ccde524c4583dfde2507563b41dee16504f6b24aef69a685"
	I1123 09:25:02.773334   77687 cri.go:89] found id: "1364a68c663de4ec03d6c3f263b8fa435f60ed6f587a1d8c69c68574c3028a16"
	I1123 09:25:02.773338   77687 cri.go:89] found id: "57e021fa16b348dacd32b5db60bcf18618a4bd2723bead3fec97bd84820ae20d"
	I1123 09:25:02.773341   77687 cri.go:89] found id: "3be653d3906b767500336b15d61cf5636f7a5b7c372f4f4239c08bad906d64bb"
	I1123 09:25:02.773344   77687 cri.go:89] found id: "ae93f08af7cde91292f142bed64324d42e3c1e7deb6434ab827d5ecf8065d37c"
	I1123 09:25:02.773348   77687 cri.go:89] found id: "58c6caa5d7a2a89fda27f06ce40a04b27480a4b2e04bb5411861ff89abe5e146"
	I1123 09:25:02.773356   77687 cri.go:89] found id: "021ee69331dd21bf229ecd1db5d55d798fd0eee37dc0bb0b9a624c5cbccc770f"
	I1123 09:25:02.773362   77687 cri.go:89] found id: "59c5e7c66e3835f4f14bd4a82f661c738488bc6c624cc2fd13eabad0519797c8"
	I1123 09:25:02.773366   77687 cri.go:89] found id: "f4fec8768321222a9f9bf178328a43695dc72a29313975ce785004a208ca5af3"
	I1123 09:25:02.773371   77687 cri.go:89] found id: "00cf685e4f7633fdc7ff68303c67f4f43add29e8d9ccd87d21a6a088f2fdbc68"
	I1123 09:25:02.773376   77687 cri.go:89] found id: "6e05171fad5d43e018ce9c94cfb7891e9984df090d0b8adddc8122e2efd84ff6"
	I1123 09:25:02.773380   77687 cri.go:89] found id: "da035bc9e46eb83341d5eb40ca2fa703f3cea336a6a912ef4a80eeaf0a0ac076"
	I1123 09:25:02.773398   77687 cri.go:89] found id: "c21acab334cad461bd90789dbd1cf7e4a162446d76ac18a241cd3b8f9863be14"
	I1123 09:25:02.773409   77687 cri.go:89] found id: "8f3fdc51b52f6779513f36acefb86bcc8943baf18483f08bf8cce60927bd9cd4"
	I1123 09:25:02.773415   77687 cri.go:89] found id: "01d6b9bf1de88e27a372ace627c4c029fd51c26dd3f9e477e70137ecab416c36"
	I1123 09:25:02.773419   77687 cri.go:89] found id: "403102191b13c2eef45478f5af6a1ed72ff7fbdea27c8bebc65ffccf6197a3be"
	I1123 09:25:02.773424   77687 cri.go:89] found id: "d98e916f227153ff84dad39f7895deed814fbbef0272aa14546e6a49f6c7226d"
	I1123 09:25:02.773428   77687 cri.go:89] found id: "628b56a1e0e47a0532ea5375471e5d17f64b1bece8bd8004b4ed449cf90764a3"
	I1123 09:25:02.773433   77687 cri.go:89] found id: "93dfa5558a7a808c5c354787ab8eec238559016b46eb3e6825f32eb25403e092"
	I1123 09:25:02.773436   77687 cri.go:89] found id: "b5f64ab3094a653f0bd8f634e5e2cc5066d0b571ace3c66c888b4190eadc2d99"
	I1123 09:25:02.773438   77687 cri.go:89] found id: "d8909c0c21553cdb1824a36e8e2357948596cd908eaa63008f1925c3a97b4f14"
	I1123 09:25:02.773441   77687 cri.go:89] found id: ""
	I1123 09:25:02.773491   77687 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:25:02.793117   77687 out.go:203] 
	W1123 09:25:02.795045   77687 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:25:02.795074   77687 out.go:285] * 
	* 
	W1123 09:25:02.800587   77687 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:25:02.802025   77687 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-768607 --alsologtostderr -v=1": exit status 11
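The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's pre-enable paused check, which runs `sudo runc list -f json` on the node and fails here because /run/runc does not exist on this crio node. A minimal way to re-run that probe by hand for local triage is sketched below; the profile name and binary path are taken from this run, and crio's configured runtime root may differ from /run/runc.

    # same probe the addon-enable path runs (see the error captured above)
    out/minikube-linux-amd64 -p addons-768607 ssh "sudo runc list -f json"
    # check whether the state directory runc tries to open actually exists
    out/minikube-linux-amd64 -p addons-768607 ssh "ls -ld /run/runc"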
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-768607
helpers_test.go:243: (dbg) docker inspect addons-768607:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6e966db2f1a57a063d5b1f4866cae1e860dd794b89727fc482702ed6ac3082b2",
	        "Created": "2025-11-23T09:23:02.86656893Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 69991,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:23:02.897619684Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/6e966db2f1a57a063d5b1f4866cae1e860dd794b89727fc482702ed6ac3082b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6e966db2f1a57a063d5b1f4866cae1e860dd794b89727fc482702ed6ac3082b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/6e966db2f1a57a063d5b1f4866cae1e860dd794b89727fc482702ed6ac3082b2/hosts",
	        "LogPath": "/var/lib/docker/containers/6e966db2f1a57a063d5b1f4866cae1e860dd794b89727fc482702ed6ac3082b2/6e966db2f1a57a063d5b1f4866cae1e860dd794b89727fc482702ed6ac3082b2-json.log",
	        "Name": "/addons-768607",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-768607:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-768607",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6e966db2f1a57a063d5b1f4866cae1e860dd794b89727fc482702ed6ac3082b2",
	                "LowerDir": "/var/lib/docker/overlay2/b2a7f2104ed49d12c661afd063ce774ea22c13012302c7cf4abbbe5d18af635c-init/diff:/var/lib/docker/overlay2/fa24abb4c55f78a010c7e2a32f724b8d5e912441e40bb77877899b0e5f3a9c8d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b2a7f2104ed49d12c661afd063ce774ea22c13012302c7cf4abbbe5d18af635c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b2a7f2104ed49d12c661afd063ce774ea22c13012302c7cf4abbbe5d18af635c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b2a7f2104ed49d12c661afd063ce774ea22c13012302c7cf4abbbe5d18af635c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-768607",
	                "Source": "/var/lib/docker/volumes/addons-768607/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-768607",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-768607",
	                "name.minikube.sigs.k8s.io": "addons-768607",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ade245353e082f83fd8f44e41d063370cfe3240a56a17ac35203712ce7ac5053",
	            "SandboxKey": "/var/run/docker/netns/ade245353e08",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-768607": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d05a2e060af245071e9e38162d3b1dfea063be4b3ecf7939f3ceb965fdb3a2a7",
	                    "EndpointID": "12a9efc2699fe940833b0219ad40f1acc062c309e4d3677f6f31c7e2141ecdba",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "76:16:1b:7f:3a:95",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-768607",
	                        "6e966db2f1a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
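The NetworkSettings.Ports block above is how the harness and minikube locate the host ports published for this node; later in the "Last Start" log minikube reads the SSH mapping with the same Go template. As a quick sketch, the 22/tcp host port shown above (32768 in this run) can be read back directly:

    # print the host port mapped to the node's SSH port 22 (same template minikube uses below)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-768607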
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-768607 -n addons-768607
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-768607 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-768607 logs -n 25: (1.181612865s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-734762 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-734762   │ jenkins │ v1.37.0 │ 23 Nov 25 09:21 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ delete  │ -p download-only-734762                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-734762   │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ start   │ -o=json --download-only -p download-only-581985 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-581985   │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ delete  │ -p download-only-581985                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-581985   │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ delete  │ -p download-only-734762                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-734762   │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ delete  │ -p download-only-581985                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-581985   │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ start   │ --download-only -p download-docker-707806 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-707806 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │                     │
	│ delete  │ -p download-docker-707806                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-707806 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ start   │ --download-only -p binary-mirror-045361 --alsologtostderr --binary-mirror http://127.0.0.1:36233 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-045361   │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │                     │
	│ delete  │ -p binary-mirror-045361                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-045361   │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ addons  │ enable dashboard -p addons-768607                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-768607          │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │                     │
	│ addons  │ disable dashboard -p addons-768607                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-768607          │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │                     │
	│ start   │ -p addons-768607 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-768607          │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:24 UTC │
	│ addons  │ addons-768607 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-768607          │ jenkins │ v1.37.0 │ 23 Nov 25 09:24 UTC │                     │
	│ addons  │ addons-768607 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-768607          │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │                     │
	│ addons  │ enable headlamp -p addons-768607 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-768607          │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:22:41
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:22:41.845178   69327 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:22:41.845442   69327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:22:41.845452   69327 out.go:374] Setting ErrFile to fd 2...
	I1123 09:22:41.845456   69327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:22:41.845647   69327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:22:41.846174   69327 out.go:368] Setting JSON to false
	I1123 09:22:41.846976   69327 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7503,"bootTime":1763882259,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:22:41.847027   69327 start.go:143] virtualization: kvm guest
	I1123 09:22:41.848606   69327 out.go:179] * [addons-768607] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:22:41.849584   69327 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 09:22:41.849652   69327 notify.go:221] Checking for updates...
	I1123 09:22:41.851424   69327 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:22:41.852541   69327 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 09:22:41.853546   69327 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 09:22:41.854398   69327 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:22:41.855153   69327 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:22:41.856138   69327 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:22:41.877291   69327 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:22:41.877405   69327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:22:41.935628   69327 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-23 09:22:41.926466694 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:22:41.935738   69327 docker.go:319] overlay module found
	I1123 09:22:41.937697   69327 out.go:179] * Using the docker driver based on user configuration
	I1123 09:22:41.938581   69327 start.go:309] selected driver: docker
	I1123 09:22:41.938599   69327 start.go:927] validating driver "docker" against <nil>
	I1123 09:22:41.938611   69327 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:22:41.939144   69327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:22:41.996634   69327 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-23 09:22:41.987036699 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:22:41.996880   69327 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 09:22:41.997172   69327 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:22:41.998547   69327 out.go:179] * Using Docker driver with root privileges
	I1123 09:22:41.999372   69327 cni.go:84] Creating CNI manager for ""
	I1123 09:22:41.999451   69327 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:22:41.999463   69327 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 09:22:41.999553   69327 start.go:353] cluster config:
	{Name:addons-768607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-768607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1123 09:22:42.000586   69327 out.go:179] * Starting "addons-768607" primary control-plane node in "addons-768607" cluster
	I1123 09:22:42.001477   69327 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:22:42.002514   69327 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:22:42.003510   69327 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:22:42.003538   69327 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 09:22:42.003546   69327 cache.go:65] Caching tarball of preloaded images
	I1123 09:22:42.003586   69327 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:22:42.003622   69327 preload.go:238] Found /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 09:22:42.003633   69327 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:22:42.003963   69327 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/config.json ...
	I1123 09:22:42.003986   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/config.json: {Name:mk172409a5230dba5b2cb2ce3fd515465b507f51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:22:42.019536   69327 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 09:22:42.019669   69327 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 09:22:42.019686   69327 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1123 09:22:42.019691   69327 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1123 09:22:42.019702   69327 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1123 09:22:42.019712   69327 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1123 09:22:54.705848   69327 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1123 09:22:54.705888   69327 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:22:54.705938   69327 start.go:360] acquireMachinesLock for addons-768607: {Name:mkc7494b2a4d470d5bd9858d5c41d565f6324348 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:22:54.706041   69327 start.go:364] duration metric: took 80.772µs to acquireMachinesLock for "addons-768607"
	I1123 09:22:54.706065   69327 start.go:93] Provisioning new machine with config: &{Name:addons-768607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-768607 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:22:54.706158   69327 start.go:125] createHost starting for "" (driver="docker")
	I1123 09:22:54.707610   69327 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1123 09:22:54.707855   69327 start.go:159] libmachine.API.Create for "addons-768607" (driver="docker")
	I1123 09:22:54.707888   69327 client.go:173] LocalClient.Create starting
	I1123 09:22:54.708018   69327 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem
	I1123 09:22:54.740873   69327 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem
	I1123 09:22:55.010083   69327 cli_runner.go:164] Run: docker network inspect addons-768607 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 09:22:55.028016   69327 cli_runner.go:211] docker network inspect addons-768607 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 09:22:55.028109   69327 network_create.go:284] running [docker network inspect addons-768607] to gather additional debugging logs...
	I1123 09:22:55.028134   69327 cli_runner.go:164] Run: docker network inspect addons-768607
	W1123 09:22:55.043647   69327 cli_runner.go:211] docker network inspect addons-768607 returned with exit code 1
	I1123 09:22:55.043674   69327 network_create.go:287] error running [docker network inspect addons-768607]: docker network inspect addons-768607: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-768607 not found
	I1123 09:22:55.043699   69327 network_create.go:289] output of [docker network inspect addons-768607]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-768607 not found
	
	** /stderr **
	I1123 09:22:55.043811   69327 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:22:55.060754   69327 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014f3020}
	I1123 09:22:55.060791   69327 network_create.go:124] attempt to create docker network addons-768607 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1123 09:22:55.060839   69327 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-768607 addons-768607
	I1123 09:22:55.105657   69327 network_create.go:108] docker network addons-768607 192.168.49.0/24 created
	I1123 09:22:55.105696   69327 kic.go:121] calculated static IP "192.168.49.2" for the "addons-768607" container
	I1123 09:22:55.105767   69327 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 09:22:55.121034   69327 cli_runner.go:164] Run: docker volume create addons-768607 --label name.minikube.sigs.k8s.io=addons-768607 --label created_by.minikube.sigs.k8s.io=true
	I1123 09:22:55.138081   69327 oci.go:103] Successfully created a docker volume addons-768607
	I1123 09:22:55.138177   69327 cli_runner.go:164] Run: docker run --rm --name addons-768607-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-768607 --entrypoint /usr/bin/test -v addons-768607:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 09:22:58.609105   69327 cli_runner.go:217] Completed: docker run --rm --name addons-768607-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-768607 --entrypoint /usr/bin/test -v addons-768607:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (3.470859205s)
	I1123 09:22:58.609145   69327 oci.go:107] Successfully prepared a docker volume addons-768607
	I1123 09:22:58.609200   69327 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:22:58.609216   69327 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 09:22:58.609304   69327 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-768607:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 09:23:02.790011   69327 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-768607:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.180629873s)
	I1123 09:23:02.790049   69327 kic.go:203] duration metric: took 4.180829291s to extract preloaded images to volume ...
	W1123 09:23:02.790376   69327 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 09:23:02.790430   69327 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 09:23:02.790486   69327 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 09:23:02.849460   69327 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-768607 --name addons-768607 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-768607 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-768607 --network addons-768607 --ip 192.168.49.2 --volume addons-768607:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 09:23:03.165152   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Running}}
	I1123 09:23:03.183675   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:03.201623   69327 cli_runner.go:164] Run: docker exec addons-768607 stat /var/lib/dpkg/alternatives/iptables
	I1123 09:23:03.248707   69327 oci.go:144] the created container "addons-768607" has a running status.
	I1123 09:23:03.248742   69327 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa...
	I1123 09:23:03.418239   69327 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 09:23:03.445982   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:03.472604   69327 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 09:23:03.472633   69327 kic_runner.go:114] Args: [docker exec --privileged addons-768607 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 09:23:03.523554   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:03.546116   69327 machine.go:94] provisionDockerMachine start ...
	I1123 09:23:03.546227   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:03.566920   69327 main.go:143] libmachine: Using SSH client type: native
	I1123 09:23:03.567214   69327 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 09:23:03.567241   69327 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:23:03.712999   69327 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-768607
	
	I1123 09:23:03.713058   69327 ubuntu.go:182] provisioning hostname "addons-768607"
	I1123 09:23:03.713194   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:03.732920   69327 main.go:143] libmachine: Using SSH client type: native
	I1123 09:23:03.733239   69327 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 09:23:03.733302   69327 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-768607 && echo "addons-768607" | sudo tee /etc/hostname
	I1123 09:23:03.887999   69327 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-768607
	
	I1123 09:23:03.888115   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:03.905952   69327 main.go:143] libmachine: Using SSH client type: native
	I1123 09:23:03.906210   69327 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 09:23:03.906235   69327 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-768607' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-768607/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-768607' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:23:04.050203   69327 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:23:04.050248   69327 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-64343/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-64343/.minikube}
	I1123 09:23:04.050324   69327 ubuntu.go:190] setting up certificates
	I1123 09:23:04.050354   69327 provision.go:84] configureAuth start
	I1123 09:23:04.050441   69327 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-768607
	I1123 09:23:04.067805   69327 provision.go:143] copyHostCerts
	I1123 09:23:04.067923   69327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem (1082 bytes)
	I1123 09:23:04.068045   69327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem (1123 bytes)
	I1123 09:23:04.068130   69327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem (1675 bytes)
	I1123 09:23:04.068197   69327 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem org=jenkins.addons-768607 san=[127.0.0.1 192.168.49.2 addons-768607 localhost minikube]
	I1123 09:23:04.159128   69327 provision.go:177] copyRemoteCerts
	I1123 09:23:04.159193   69327 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:23:04.159233   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:04.176940   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:04.278711   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 09:23:04.298581   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 09:23:04.316897   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:23:04.334219   69327 provision.go:87] duration metric: took 283.834823ms to configureAuth
	I1123 09:23:04.334251   69327 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:23:04.334561   69327 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:23:04.334724   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:04.352792   69327 main.go:143] libmachine: Using SSH client type: native
	I1123 09:23:04.353071   69327 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 09:23:04.353115   69327 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:23:04.636919   69327 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:23:04.636946   69327 machine.go:97] duration metric: took 1.090791665s to provisionDockerMachine
	I1123 09:23:04.636958   69327 client.go:176] duration metric: took 9.929061873s to LocalClient.Create
	I1123 09:23:04.636978   69327 start.go:167] duration metric: took 9.92912503s to libmachine.API.Create "addons-768607"
	I1123 09:23:04.636993   69327 start.go:293] postStartSetup for "addons-768607" (driver="docker")
	I1123 09:23:04.637006   69327 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:23:04.637062   69327 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:23:04.637122   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:04.654110   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:04.757065   69327 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:23:04.760730   69327 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:23:04.760756   69327 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:23:04.760769   69327 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/addons for local assets ...
	I1123 09:23:04.760829   69327 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/files for local assets ...
	I1123 09:23:04.760853   69327 start.go:296] duration metric: took 123.854136ms for postStartSetup
	I1123 09:23:04.761182   69327 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-768607
	I1123 09:23:04.778522   69327 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/config.json ...
	I1123 09:23:04.778814   69327 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:23:04.778871   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:04.797143   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:04.895204   69327 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:23:04.899849   69327 start.go:128] duration metric: took 10.193673517s to createHost
	I1123 09:23:04.899877   69327 start.go:83] releasing machines lock for "addons-768607", held for 10.193824633s
	I1123 09:23:04.899951   69327 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-768607
	I1123 09:23:04.916463   69327 ssh_runner.go:195] Run: cat /version.json
	I1123 09:23:04.916503   69327 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:23:04.916523   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:04.916572   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:04.934644   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:04.935936   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:05.088006   69327 ssh_runner.go:195] Run: systemctl --version
	I1123 09:23:05.094414   69327 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:23:05.128787   69327 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:23:05.133338   69327 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:23:05.133391   69327 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:23:05.158905   69327 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
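	The find/mv pair above is how minikube sidelines any pre-existing bridge or podman CNI configs so that the kindnet CNI it recommends later is the only network plugin CRI-O loads. A quoted, readable sketch of the same rename (the log strips shell quoting, and the -printf progress output is dropped here):

	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	        \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	        -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;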
	I1123 09:23:05.158928   69327 start.go:496] detecting cgroup driver to use...
	I1123 09:23:05.158963   69327 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:23:05.159017   69327 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:23:05.174583   69327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:23:05.186476   69327 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:23:05.186539   69327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:23:05.202722   69327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:23:05.219530   69327 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:23:05.298512   69327 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:23:05.382299   69327 docker.go:234] disabling docker service ...
	I1123 09:23:05.382367   69327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:23:05.400047   69327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:23:05.412281   69327 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:23:05.495822   69327 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:23:05.576005   69327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:23:05.588612   69327 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:23:05.602458   69327 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:23:05.602511   69327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:23:05.612805   69327 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 09:23:05.612869   69327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:23:05.621707   69327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:23:05.630359   69327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:23:05.638875   69327 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:23:05.646785   69327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:23:05.655542   69327 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:23:05.668796   69327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:23:05.677299   69327 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:23:05.684501   69327 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1123 09:23:05.684578   69327 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1123 09:23:05.696079   69327 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:23:05.703336   69327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:23:05.783105   69327 ssh_runner.go:195] Run: sudo systemctl restart crio
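	The crictl.yaml write and the run of sed edits above rewrite CRI-O's drop-in config before the daemon-reload and restart on the line just above. A rough way to spot-check the values those edits should leave behind (not captured from this run; reconstructed from the sed expressions):

	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    # expected, roughly:
	    #   pause_image = "registry.k8s.io/pause:3.10.1"
	    #   cgroup_manager = "systemd"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",
	    cat /etc/crictl.yaml
	    #   runtime-endpoint: unix:///var/run/crio/crio.sock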
	I1123 09:23:05.916567   69327 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:23:05.916641   69327 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:23:05.920552   69327 start.go:564] Will wait 60s for crictl version
	I1123 09:23:05.920616   69327 ssh_runner.go:195] Run: which crictl
	I1123 09:23:05.923971   69327 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:23:05.948178   69327 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:23:05.948275   69327 ssh_runner.go:195] Run: crio --version
	I1123 09:23:05.975840   69327 ssh_runner.go:195] Run: crio --version
	I1123 09:23:06.004747   69327 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:23:06.005848   69327 cli_runner.go:164] Run: docker network inspect addons-768607 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:23:06.021825   69327 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 09:23:06.025743   69327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
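	The grep/echo/cp pipeline on the Run line above is minikube's idempotent /etc/hosts update: drop any stale host.minikube.internal line, append the fresh mapping, and copy the temp file back over /etc/hosts. The same pattern is reused later for control-plane.minikube.internal. A readable sketch with the fields pulled out (variable names are illustrative):

	    ip=192.168.49.1
	    name=host.minikube.internal
	    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$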
	I1123 09:23:06.035593   69327 kubeadm.go:884] updating cluster {Name:addons-768607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-768607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:23:06.035745   69327 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:23:06.035798   69327 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:23:06.065768   69327 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:23:06.065794   69327 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:23:06.065842   69327 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:23:06.090810   69327 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:23:06.090832   69327 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:23:06.090842   69327 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1123 09:23:06.090934   69327 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-768607 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-768607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:23:06.091004   69327 ssh_runner.go:195] Run: crio config
	I1123 09:23:06.136226   69327 cni.go:84] Creating CNI manager for ""
	I1123 09:23:06.136252   69327 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:23:06.136274   69327 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:23:06.136305   69327 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-768607 NodeName:addons-768607 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:23:06.136457   69327 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-768607"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
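	The block above is the full kubeadm config minikube generated; a few lines below it is written to /var/tmp/minikube/kubeadm.yaml.new and later copied to kubeadm.yaml before init runs. To sanity-check a config like this by hand, a dry run is probably the least invasive option (illustrative, not part of this run):

	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	        --config /var/tmp/minikube/kubeadm.yaml --dry-run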
	I1123 09:23:06.136530   69327 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:23:06.144621   69327 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:23:06.144704   69327 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:23:06.152199   69327 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1123 09:23:06.164411   69327 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:23:06.179276   69327 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1123 09:23:06.191428   69327 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:23:06.194885   69327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:23:06.204489   69327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:23:06.281510   69327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:23:06.305442   69327 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607 for IP: 192.168.49.2
	I1123 09:23:06.305468   69327 certs.go:195] generating shared ca certs ...
	I1123 09:23:06.305487   69327 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.305624   69327 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 09:23:06.392514   69327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt ...
	I1123 09:23:06.392545   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt: {Name:mkb0b2f20c82c92a595b06060c9b28d59726abb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.392711   69327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key ...
	I1123 09:23:06.392722   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key: {Name:mk0e916e50a2a76a994240de1927c80f62fdb3ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.392795   69327 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 09:23:06.466894   69327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt ...
	I1123 09:23:06.466923   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt: {Name:mkf6247adea6b984cc4f63b3f8a2487a7fd6e5f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.467082   69327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key ...
	I1123 09:23:06.467106   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key: {Name:mkd55ce5b37dd005e47af224f829fd3cd6df381e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.467180   69327 certs.go:257] generating profile certs ...
	I1123 09:23:06.467255   69327 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.key
	I1123 09:23:06.467271   69327 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt with IP's: []
	I1123 09:23:06.629150   69327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt ...
	I1123 09:23:06.629184   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: {Name:mk6e6fbdb023797ced59d7c2fefde3822f09ba65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.629351   69327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.key ...
	I1123 09:23:06.629363   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.key: {Name:mk1e879382e5b1ad328d77fd893a51a75b477bcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.629434   69327 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.key.6a296f45
	I1123 09:23:06.629457   69327 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.crt.6a296f45 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1123 09:23:06.756051   69327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.crt.6a296f45 ...
	I1123 09:23:06.756084   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.crt.6a296f45: {Name:mkc56f10ea3bbeb10badaf9747f7867d6936e98d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.756256   69327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.key.6a296f45 ...
	I1123 09:23:06.756269   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.key.6a296f45: {Name:mk438501f50e865a11b9d5fbb813ea11f0ed7beb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.756337   69327 certs.go:382] copying /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.crt.6a296f45 -> /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.crt
	I1123 09:23:06.756411   69327 certs.go:386] copying /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.key.6a296f45 -> /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.key
	I1123 09:23:06.756461   69327 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/proxy-client.key
	I1123 09:23:06.756481   69327 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/proxy-client.crt with IP's: []
	I1123 09:23:06.773293   69327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/proxy-client.crt ...
	I1123 09:23:06.773317   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/proxy-client.crt: {Name:mk4e53bb9dce8aa26c68ece66b65e11396e99a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.773446   69327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/proxy-client.key ...
	I1123 09:23:06.773456   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/proxy-client.key: {Name:mk24afcff5b8283bc06b53a25a5501bfd9b6a1bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:06.773614   69327 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 09:23:06.773649   69327 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 09:23:06.773674   69327 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:23:06.773699   69327 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 09:23:06.774278   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:23:06.792437   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 09:23:06.809224   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:23:06.826226   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 09:23:06.842879   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1123 09:23:06.859663   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 09:23:06.876306   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:23:06.892887   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:23:06.909695   69327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:23:06.927886   69327 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:23:06.939758   69327 ssh_runner.go:195] Run: openssl version
	I1123 09:23:06.945661   69327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:23:06.955893   69327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:23:06.959316   69327 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:23:06.959369   69327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:23:06.993189   69327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
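	The openssl/ln pair above wires the minikube CA into the host trust store: /etc/ssl/certs/b5213941.0 is not a magic name, it is the certificate's OpenSSL subject hash, which is what hash-based lookup under /etc/ssl/certs expects. The derivation, spelled out as a sketch:

	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # -> b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"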
	I1123 09:23:07.001970   69327 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:23:07.005411   69327 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 09:23:07.005477   69327 kubeadm.go:401] StartCluster: {Name:addons-768607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-768607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:23:07.005581   69327 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:23:07.005638   69327 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:23:07.031748   69327 cri.go:89] found id: ""
	I1123 09:23:07.031818   69327 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:23:07.039684   69327 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 09:23:07.047505   69327 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 09:23:07.047563   69327 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 09:23:07.055062   69327 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 09:23:07.055080   69327 kubeadm.go:158] found existing configuration files:
	
	I1123 09:23:07.055147   69327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 09:23:07.062466   69327 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 09:23:07.062520   69327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 09:23:07.069889   69327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 09:23:07.077584   69327 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 09:23:07.077641   69327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 09:23:07.084751   69327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 09:23:07.092127   69327 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 09:23:07.092191   69327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 09:23:07.099537   69327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 09:23:07.107067   69327 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 09:23:07.107138   69327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 09:23:07.114240   69327 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 09:23:07.150396   69327 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 09:23:07.150479   69327 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 09:23:07.183101   69327 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 09:23:07.183197   69327 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 09:23:07.183251   69327 kubeadm.go:319] OS: Linux
	I1123 09:23:07.183326   69327 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 09:23:07.183386   69327 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 09:23:07.183453   69327 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 09:23:07.183522   69327 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 09:23:07.183592   69327 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 09:23:07.183667   69327 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 09:23:07.183733   69327 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 09:23:07.183804   69327 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 09:23:07.239833   69327 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 09:23:07.239999   69327 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 09:23:07.240147   69327 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 09:23:07.247511   69327 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 09:23:07.249229   69327 out.go:252]   - Generating certificates and keys ...
	I1123 09:23:07.249319   69327 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 09:23:07.249383   69327 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 09:23:07.654184   69327 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 09:23:07.938866   69327 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 09:23:08.066210   69327 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 09:23:08.152082   69327 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 09:23:08.273989   69327 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 09:23:08.274130   69327 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-768607 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 09:23:08.598691   69327 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 09:23:08.598864   69327 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-768607 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 09:23:08.990245   69327 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 09:23:09.208570   69327 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 09:23:09.461890   69327 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 09:23:09.461969   69327 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 09:23:09.596030   69327 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 09:23:10.494405   69327 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 09:23:10.719186   69327 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 09:23:10.809276   69327 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 09:23:11.417821   69327 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 09:23:11.418434   69327 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 09:23:11.421971   69327 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 09:23:11.423330   69327 out.go:252]   - Booting up control plane ...
	I1123 09:23:11.423446   69327 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 09:23:11.423556   69327 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 09:23:11.424260   69327 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 09:23:11.438487   69327 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 09:23:11.438617   69327 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 09:23:11.444841   69327 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 09:23:11.445150   69327 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 09:23:11.445214   69327 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 09:23:11.538119   69327 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 09:23:11.538296   69327 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 09:23:12.039796   69327 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.810666ms
	I1123 09:23:12.042527   69327 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 09:23:12.042652   69327 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1123 09:23:12.042778   69327 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 09:23:12.042848   69327 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 09:23:13.692589   69327 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.649903678s
	I1123 09:23:13.841728   69327 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.799124192s
	I1123 09:23:15.544311   69327 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501609481s
	I1123 09:23:15.554895   69327 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 09:23:15.565328   69327 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 09:23:15.572967   69327 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 09:23:15.573188   69327 kubeadm.go:319] [mark-control-plane] Marking the node addons-768607 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 09:23:15.580051   69327 kubeadm.go:319] [bootstrap-token] Using token: 4hjpo5.joyzmp41y87gwlxq
	I1123 09:23:15.582107   69327 out.go:252]   - Configuring RBAC rules ...
	I1123 09:23:15.582244   69327 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 09:23:15.586320   69327 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 09:23:15.590869   69327 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 09:23:15.593804   69327 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 09:23:15.596118   69327 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 09:23:15.598300   69327 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 09:23:15.949911   69327 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 09:23:16.362448   69327 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 09:23:16.950458   69327 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 09:23:16.951594   69327 kubeadm.go:319] 
	I1123 09:23:16.951715   69327 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 09:23:16.951734   69327 kubeadm.go:319] 
	I1123 09:23:16.951822   69327 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 09:23:16.951833   69327 kubeadm.go:319] 
	I1123 09:23:16.951873   69327 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 09:23:16.951948   69327 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 09:23:16.951998   69327 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 09:23:16.952005   69327 kubeadm.go:319] 
	I1123 09:23:16.952078   69327 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 09:23:16.952108   69327 kubeadm.go:319] 
	I1123 09:23:16.952150   69327 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 09:23:16.952157   69327 kubeadm.go:319] 
	I1123 09:23:16.952215   69327 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 09:23:16.952332   69327 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 09:23:16.952431   69327 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 09:23:16.952442   69327 kubeadm.go:319] 
	I1123 09:23:16.952556   69327 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 09:23:16.952659   69327 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 09:23:16.952674   69327 kubeadm.go:319] 
	I1123 09:23:16.952791   69327 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4hjpo5.joyzmp41y87gwlxq \
	I1123 09:23:16.952910   69327 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 \
	I1123 09:23:16.952935   69327 kubeadm.go:319] 	--control-plane 
	I1123 09:23:16.952941   69327 kubeadm.go:319] 
	I1123 09:23:16.953018   69327 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 09:23:16.953034   69327 kubeadm.go:319] 
	I1123 09:23:16.953162   69327 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4hjpo5.joyzmp41y87gwlxq \
	I1123 09:23:16.953264   69327 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 
	I1123 09:23:16.955314   69327 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 09:23:16.955466   69327 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 09:23:16.955506   69327 cni.go:84] Creating CNI manager for ""
	I1123 09:23:16.955525   69327 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:23:16.956711   69327 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 09:23:16.957740   69327 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 09:23:16.962022   69327 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 09:23:16.962040   69327 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 09:23:16.974584   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 09:23:17.166897   69327 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 09:23:17.166988   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:17.167001   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-768607 minikube.k8s.io/updated_at=2025_11_23T09_23_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=addons-768607 minikube.k8s.io/primary=true
	I1123 09:23:17.176132   69327 ops.go:34] apiserver oom_adj: -16
	I1123 09:23:17.236581   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:17.736781   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:18.236921   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:18.737583   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:19.237627   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:19.737060   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:20.237135   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:20.737532   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:21.237306   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:21.736925   69327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:23:21.802949   69327 kubeadm.go:1114] duration metric: took 4.63603113s to wait for elevateKubeSystemPrivileges
	I1123 09:23:21.802997   69327 kubeadm.go:403] duration metric: took 14.79752545s to StartCluster
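	The run of kubectl get sa default calls above, roughly one every half second, is minikube waiting for the default ServiceAccount to appear in the fresh cluster; that wait is what the 4.63s elevateKubeSystemPrivileges metric measures. In shell terms it amounts to (sketch):

	    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	        sleep 0.5
	    done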
	I1123 09:23:21.803023   69327 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:21.803156   69327 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 09:23:21.803634   69327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:23:21.803836   69327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 09:23:21.803862   69327 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:23:21.803926   69327 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1123 09:23:21.804068   69327 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:23:21.804117   69327 addons.go:70] Setting ingress-dns=true in profile "addons-768607"
	I1123 09:23:21.804115   69327 addons.go:70] Setting default-storageclass=true in profile "addons-768607"
	I1123 09:23:21.804129   69327 addons.go:70] Setting gcp-auth=true in profile "addons-768607"
	I1123 09:23:21.804138   69327 addons.go:70] Setting registry-creds=true in profile "addons-768607"
	I1123 09:23:21.804141   69327 addons.go:239] Setting addon ingress-dns=true in "addons-768607"
	I1123 09:23:21.804082   69327 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-768607"
	I1123 09:23:21.804152   69327 addons.go:239] Setting addon registry-creds=true in "addons-768607"
	I1123 09:23:21.804155   69327 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-768607"
	I1123 09:23:21.804162   69327 mustload.go:66] Loading cluster: addons-768607
	I1123 09:23:21.804168   69327 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-768607"
	I1123 09:23:21.804180   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.804189   69327 addons.go:70] Setting storage-provisioner=true in profile "addons-768607"
	I1123 09:23:21.804198   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.804200   69327 addons.go:239] Setting addon storage-provisioner=true in "addons-768607"
	I1123 09:23:21.804192   69327 addons.go:70] Setting metrics-server=true in profile "addons-768607"
	I1123 09:23:21.804216   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.804224   69327 addons.go:239] Setting addon metrics-server=true in "addons-768607"
	I1123 09:23:21.804250   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.804253   69327 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-768607"
	I1123 09:23:21.804279   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.804410   69327 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:23:21.804675   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.804721   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.804739   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.804748   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.804751   69327 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-768607"
	I1123 09:23:21.804751   69327 addons.go:70] Setting inspektor-gadget=true in profile "addons-768607"
	I1123 09:23:21.804764   69327 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-768607"
	I1123 09:23:21.804767   69327 addons.go:239] Setting addon inspektor-gadget=true in "addons-768607"
	I1123 09:23:21.804782   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.804792   69327 addons.go:70] Setting registry=true in profile "addons-768607"
	I1123 09:23:21.804807   69327 addons.go:239] Setting addon registry=true in "addons-768607"
	I1123 09:23:21.804824   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.805220   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.805255   69327 addons.go:70] Setting volcano=true in profile "addons-768607"
	I1123 09:23:21.805271   69327 addons.go:239] Setting addon volcano=true in "addons-768607"
	I1123 09:23:21.805301   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.805703   69327 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-768607"
	I1123 09:23:21.805727   69327 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-768607"
	I1123 09:23:21.806004   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.806097   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.804181   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.806704   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.806883   69327 addons.go:70] Setting volumesnapshots=true in profile "addons-768607"
	I1123 09:23:21.806900   69327 addons.go:239] Setting addon volumesnapshots=true in "addons-768607"
	I1123 09:23:21.806936   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.807225   69327 addons.go:70] Setting cloud-spanner=true in profile "addons-768607"
	I1123 09:23:21.807260   69327 addons.go:239] Setting addon cloud-spanner=true in "addons-768607"
	I1123 09:23:21.807288   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.807437   69327 out.go:179] * Verifying Kubernetes components...
	I1123 09:23:21.804145   69327 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-768607"
	I1123 09:23:21.804074   69327 addons.go:70] Setting yakd=true in profile "addons-768607"
	I1123 09:23:21.804739   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.804109   69327 addons.go:70] Setting ingress=true in profile "addons-768607"
	I1123 09:23:21.807713   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.809856   69327 addons.go:239] Setting addon ingress=true in "addons-768607"
	I1123 09:23:21.810036   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.804783   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.810702   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.810796   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.804739   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.811969   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.812206   69327 addons.go:239] Setting addon yakd=true in "addons-768607"
	I1123 09:23:21.813107   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.813497   69327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:23:21.817998   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.818976   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.819966   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.858486   69327 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1123 09:23:21.860948   69327 out.go:179]   - Using image docker.io/registry:3.0.0
	I1123 09:23:21.863845   69327 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1123 09:23:21.863868   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1123 09:23:21.863958   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.869604   69327 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1123 09:23:21.873201   69327 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1123 09:23:21.876702   69327 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1123 09:23:21.876774   69327 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 09:23:21.876809   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1123 09:23:21.876915   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.878718   69327 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1123 09:23:21.881132   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.886285   69327 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1123 09:23:21.887361   69327 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1123 09:23:21.890459   69327 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1123 09:23:21.891448   69327 addons.go:239] Setting addon default-storageclass=true in "addons-768607"
	I1123 09:23:21.891497   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.891624   69327 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-768607"
	I1123 09:23:21.891654   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:21.891714   69327 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 09:23:21.891729   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1123 09:23:21.891796   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.891984   69327 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1123 09:23:21.892096   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.892130   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:21.893277   69327 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:23:21.894835   69327 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:23:21.894855   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:23:21.894918   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.895898   69327 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1123 09:23:21.896874   69327 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1123 09:23:21.897964   69327 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1123 09:23:21.897987   69327 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1123 09:23:21.898049   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	W1123 09:23:21.908848   69327 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1123 09:23:21.913120   69327 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1123 09:23:21.914202   69327 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1123 09:23:21.914226   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1123 09:23:21.914291   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.921579   69327 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1123 09:23:21.922014   69327 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1123 09:23:21.924127   69327 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 09:23:21.924194   69327 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1123 09:23:21.924206   69327 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1123 09:23:21.924283   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.925941   69327 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 09:23:21.929884   69327 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 09:23:21.929908   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1123 09:23:21.930009   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.930219   69327 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1123 09:23:21.931534   69327 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1123 09:23:21.931554   69327 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1123 09:23:21.931661   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.933667   69327 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1123 09:23:21.934731   69327 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1123 09:23:21.935667   69327 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1123 09:23:21.935691   69327 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 09:23:21.935707   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1123 09:23:21.935766   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.936642   69327 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 09:23:21.936666   69327 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 09:23:21.936715   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.936712   69327 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 09:23:21.936757   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1123 09:23:21.936820   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.949707   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:21.953374   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:21.970621   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:21.971898   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:21.973630   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:21.974876   69327 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1123 09:23:21.976502   69327 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1123 09:23:21.976523   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1123 09:23:21.976574   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.976967   69327 out.go:179]   - Using image docker.io/busybox:stable
	I1123 09:23:21.978115   69327 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1123 09:23:21.979222   69327 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 09:23:21.979241   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1123 09:23:21.979294   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.979312   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:21.981929   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:21.982070   69327 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:23:21.983134   69327 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:23:21.983570   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:21.989925   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:21.992193   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:21.992685   69327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 09:23:21.999903   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:22.005184   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:22.012459   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	W1123 09:23:22.013028   69327 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1123 09:23:22.013082   69327 retry.go:31] will retry after 148.455388ms: ssh: handshake failed: EOF
	I1123 09:23:22.017714   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	W1123 09:23:22.019072   69327 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1123 09:23:22.019110   69327 retry.go:31] will retry after 210.280055ms: ssh: handshake failed: EOF
	I1123 09:23:22.030609   69327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:23:22.032441   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:22.034511   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:22.103848   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:23:22.137018   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1123 09:23:22.148398   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 09:23:22.151877   69327 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1123 09:23:22.151899   69327 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1123 09:23:22.155953   69327 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1123 09:23:22.156041   69327 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1123 09:23:22.157358   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 09:23:22.169801   69327 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1123 09:23:22.169826   69327 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1123 09:23:22.171671   69327 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1123 09:23:22.171696   69327 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1123 09:23:22.176935   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 09:23:22.197626   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 09:23:22.198777   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1123 09:23:22.200881   69327 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1123 09:23:22.200903   69327 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1123 09:23:22.201544   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 09:23:22.206593   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:23:22.211899   69327 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1123 09:23:22.211916   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1123 09:23:22.212015   69327 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1123 09:23:22.212022   69327 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1123 09:23:22.224795   69327 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1123 09:23:22.224820   69327 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1123 09:23:22.235741   69327 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1123 09:23:22.235827   69327 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1123 09:23:22.270242   69327 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1123 09:23:22.270272   69327 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1123 09:23:22.275199   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1123 09:23:22.276727   69327 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1123 09:23:22.276744   69327 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1123 09:23:22.308451   69327 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1123 09:23:22.308473   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1123 09:23:22.336747   69327 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1123 09:23:22.336852   69327 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1123 09:23:22.354428   69327 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1123 09:23:22.354459   69327 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1123 09:23:22.354871   69327 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1123 09:23:22.355690   69327 node_ready.go:35] waiting up to 6m0s for node "addons-768607" to be "Ready" ...
	I1123 09:23:22.356378   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 09:23:22.363932   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1123 09:23:22.404808   69327 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 09:23:22.404835   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1123 09:23:22.423709   69327 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1123 09:23:22.423807   69327 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1123 09:23:22.468580   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 09:23:22.476779   69327 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1123 09:23:22.476959   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1123 09:23:22.510190   69327 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 09:23:22.510287   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1123 09:23:22.536548   69327 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1123 09:23:22.536602   69327 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1123 09:23:22.571991   69327 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 09:23:22.572021   69327 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 09:23:22.599055   69327 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1123 09:23:22.599078   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1123 09:23:22.611834   69327 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 09:23:22.611861   69327 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 09:23:22.651680   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 09:23:22.655555   69327 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1123 09:23:22.655579   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1123 09:23:22.685491   69327 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 09:23:22.685520   69327 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1123 09:23:22.719907   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 09:23:22.883867   69327 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-768607" context rescaled to 1 replicas
	I1123 09:23:23.108893   69327 addons.go:495] Verifying addon registry=true in "addons-768607"
	I1123 09:23:23.112256   69327 out.go:179] * Verifying registry addon...
	I1123 09:23:23.115131   69327 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1123 09:23:23.119912   69327 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 09:23:23.119940   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:23.445164   69327 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.088748513s)
	I1123 09:23:23.445222   69327 addons.go:495] Verifying addon ingress=true in "addons-768607"
	I1123 09:23:23.445277   69327 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.081261852s)
	I1123 09:23:23.446545   69327 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-768607 service yakd-dashboard -n yakd-dashboard
	
	I1123 09:23:23.446549   69327 out.go:179] * Verifying ingress addon...
	I1123 09:23:23.449949   69327 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1123 09:23:23.452400   69327 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1123 09:23:23.452424   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:23.618494   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:23.788027   69327 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.319404953s)
	W1123 09:23:23.788073   69327 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1123 09:23:23.788120   69327 retry.go:31] will retry after 241.77724ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1123 09:23:23.788131   69327 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.136414011s)
	I1123 09:23:23.788173   69327 addons.go:495] Verifying addon metrics-server=true in "addons-768607"
	I1123 09:23:23.788362   69327 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.068416328s)
	I1123 09:23:23.788388   69327 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-768607"
	I1123 09:23:23.790468   69327 out.go:179] * Verifying csi-hostpath-driver addon...
	I1123 09:23:23.792480   69327 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1123 09:23:23.794814   69327 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 09:23:23.794830   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:23.953638   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:24.030629   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 09:23:24.118969   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:24.296775   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:24.357872   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:24.453803   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:24.619110   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:24.794928   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:24.952765   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:25.117872   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:25.295567   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:25.453772   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:25.619117   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:25.796198   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:25.953269   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:26.118804   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:26.295341   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:26.358524   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:26.453490   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:26.460990   69327 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.430320989s)
	I1123 09:23:26.618407   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:26.795819   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:26.953223   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:27.118737   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:27.295794   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:27.452741   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:27.618906   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:27.796338   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:27.953034   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:28.118340   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:28.294985   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:28.359066   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:28.453114   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:28.618284   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:28.795647   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:28.953546   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:29.118002   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:29.295757   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:29.453484   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:29.497584   69327 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1123 09:23:29.497660   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:29.514938   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:29.618149   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:29.626524   69327 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1123 09:23:29.638141   69327 addons.go:239] Setting addon gcp-auth=true in "addons-768607"
	I1123 09:23:29.638195   69327 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:23:29.638543   69327 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:23:29.656462   69327 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1123 09:23:29.656512   69327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:23:29.672854   69327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:23:29.770697   69327 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 09:23:29.771763   69327 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1123 09:23:29.772693   69327 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1123 09:23:29.772707   69327 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1123 09:23:29.785893   69327 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1123 09:23:29.785912   69327 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1123 09:23:29.795636   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:29.798659   69327 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 09:23:29.798674   69327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1123 09:23:29.810716   69327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 09:23:29.952842   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:30.118138   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:30.122589   69327 addons.go:495] Verifying addon gcp-auth=true in "addons-768607"
	I1123 09:23:30.123782   69327 out.go:179] * Verifying gcp-auth addon...
	I1123 09:23:30.125814   69327 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1123 09:23:30.128446   69327 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1123 09:23:30.128462   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:30.295112   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:30.453326   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:30.618122   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:30.627879   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:30.795647   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:30.859013   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:30.952706   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:31.118444   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:31.128297   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:31.296395   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:31.453602   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:31.618301   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:31.628316   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:31.796395   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:31.953303   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:32.117878   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:32.129161   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:32.295862   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:32.453394   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:32.617935   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:32.628968   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:32.795758   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:32.952515   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:33.118256   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:33.128339   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:33.296358   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:33.358915   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:33.453498   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:33.618179   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:33.628236   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:33.795966   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:33.952901   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:34.118345   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:34.128409   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:34.296117   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:34.453159   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:34.618627   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:34.628811   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:34.795351   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:34.953312   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:35.117934   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:35.129034   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:35.295978   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:35.453581   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:35.618323   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:35.628512   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:35.795413   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:35.858735   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:35.953312   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:36.118892   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:36.129033   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:36.296053   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:36.453190   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:36.619336   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:36.628417   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:36.796235   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:36.953128   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:37.119172   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:37.128120   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:37.296270   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:37.453637   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:37.618777   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:37.628236   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:37.796306   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:37.858834   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:37.953436   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:38.118329   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:38.128296   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:38.296055   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:38.453402   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:38.618211   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:38.628430   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:38.796354   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:38.953715   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:39.118413   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:39.128585   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:39.295669   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:39.452895   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:39.618480   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:39.628482   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:39.794948   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:39.952640   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:40.118753   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:40.128646   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:40.295260   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:40.358856   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:40.453781   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:40.618426   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:40.628349   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:40.795957   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:40.952765   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:41.118467   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:41.128553   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:41.295757   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:41.452803   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:41.618629   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:41.628822   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:41.795606   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:41.952820   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:42.118625   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:42.128377   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:42.296125   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:42.453368   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:42.618122   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:42.628289   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:42.795939   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:42.858461   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:42.953176   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:43.118810   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:43.128929   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:43.296252   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:43.453132   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:43.618703   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:43.628804   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:43.795327   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:43.953144   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:44.118801   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:44.129097   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:44.295929   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:44.453774   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:44.618398   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:44.628685   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:44.795334   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:44.858681   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:44.952906   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:45.118881   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:45.128774   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:45.295862   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:45.453214   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:45.617695   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:45.628789   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:45.795429   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:45.952769   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:46.118705   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:46.128830   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:46.295423   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:46.452702   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:46.618349   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:46.628421   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:46.796159   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:46.858796   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:46.953110   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:47.119126   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:47.128173   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:47.296601   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:47.452979   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:47.618855   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:47.628877   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:47.795432   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:47.953503   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:48.118253   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:48.128322   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:48.296189   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:48.453509   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:48.618151   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:48.628276   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:48.796391   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:48.859172   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:48.952654   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:49.118554   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:49.128744   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:49.295508   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:49.452589   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:49.618216   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:49.628123   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:49.795726   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:49.952297   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:50.117833   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:50.128678   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:50.295348   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:50.452767   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:50.618572   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:50.628713   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:50.795424   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:50.952931   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:51.118873   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:51.128955   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:51.295850   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:51.358303   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:51.452919   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:51.618592   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:51.628725   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:51.795482   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:51.952801   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:52.118644   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:52.128708   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:52.295506   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:52.453192   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:52.619213   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:52.628293   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:52.795972   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:52.953005   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:53.118840   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:53.128927   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:53.295684   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:53.359222   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:53.452693   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:53.618040   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:53.627992   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:53.795487   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:53.953793   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:54.118575   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:54.128463   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:54.295126   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:54.453611   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:54.618102   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:54.627886   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:54.795712   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:54.952424   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:55.117826   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:55.128953   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:55.295579   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:55.454174   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:55.617860   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:55.629076   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:55.795706   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:55.858995   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:55.952336   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:56.118051   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:56.128063   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:56.295862   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:56.452974   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:56.618434   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:56.628629   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:56.795383   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:56.953351   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:57.117981   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:57.128065   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:57.295828   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:57.453338   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:57.617922   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:57.627934   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:57.795696   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:23:57.859412   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:23:57.952730   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:58.118397   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:58.128450   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:58.295020   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:58.453564   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:58.618222   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:58.628193   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:58.795989   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:58.953337   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:59.117775   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:59.128870   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:59.295678   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:59.452863   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:59.618677   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:23:59.628843   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:59.795519   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:23:59.953368   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:00.118008   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:00.128983   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:00.295792   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:24:00.359149   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:24:00.452681   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:00.618254   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:00.628120   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:00.795799   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:00.952513   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:01.118494   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:01.129049   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:01.296147   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:01.454062   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:01.618924   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:01.629574   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:01.795538   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:01.953822   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:02.119061   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:02.128734   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:02.295798   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:02.453232   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:02.617971   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:02.628522   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:02.795362   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 09:24:02.859268   69327 node_ready.go:57] node "addons-768607" has "Ready":"False" status (will retry)
	I1123 09:24:02.952739   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:03.118988   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:03.129744   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:03.295797   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:03.463416   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:03.617531   69327 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 09:24:03.617557   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:03.630608   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:03.796622   69327 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 09:24:03.796653   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:03.860162   69327 node_ready.go:49] node "addons-768607" is "Ready"
	I1123 09:24:03.860204   69327 node_ready.go:38] duration metric: took 41.504482488s for node "addons-768607" to be "Ready" ...
	I1123 09:24:03.860224   69327 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:24:03.860304   69327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:24:03.880557   69327 api_server.go:72] duration metric: took 42.076650324s to wait for apiserver process to appear ...
	I1123 09:24:03.880589   69327 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:24:03.880622   69327 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:24:03.888208   69327 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1123 09:24:03.889956   69327 api_server.go:141] control plane version: v1.34.1
	I1123 09:24:03.890006   69327 api_server.go:131] duration metric: took 9.408531ms to wait for apiserver health ...
	I1123 09:24:03.890020   69327 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:24:03.904451   69327 system_pods.go:59] 20 kube-system pods found
	I1123 09:24:03.904502   69327 system_pods.go:61] "amd-gpu-device-plugin-8vlwk" [579f7026-b306-42b4-868b-da51bdb3aa62] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 09:24:03.904511   69327 system_pods.go:61] "coredns-66bc5c9577-qvd9b" [4338f282-61e8-45dc-8a2a-449a8aa65f64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:24:03.904522   69327 system_pods.go:61] "csi-hostpath-attacher-0" [7d3fe3cd-254a-497b-a24b-8d019fbf5bd6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 09:24:03.904531   69327 system_pods.go:61] "csi-hostpath-resizer-0" [544763e0-73d5-4b61-9007-c6fe9d84f20d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 09:24:03.904540   69327 system_pods.go:61] "csi-hostpathplugin-9ksmc" [394b2c7c-3431-4988-afd6-9c9f91d892b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 09:24:03.904548   69327 system_pods.go:61] "etcd-addons-768607" [3cd401e2-00f1-4f8d-a77c-136d3a2b5209] Running
	I1123 09:24:03.904555   69327 system_pods.go:61] "kindnet-tw8jx" [53e669c8-96ed-4de8-a528-3186e3a55797] Running
	I1123 09:24:03.904559   69327 system_pods.go:61] "kube-apiserver-addons-768607" [170cee0a-a920-415c-b5fc-c342107cf219] Running
	I1123 09:24:03.904564   69327 system_pods.go:61] "kube-controller-manager-addons-768607" [a03de210-0ece-464f-b0c5-ddee1361575e] Running
	I1123 09:24:03.904574   69327 system_pods.go:61] "kube-ingress-dns-minikube" [0a409b6d-8a09-46e8-bcc7-d9820885bc20] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 09:24:03.904579   69327 system_pods.go:61] "kube-proxy-szpms" [1858e471-3133-439c-8335-48c0a459824d] Running
	I1123 09:24:03.904584   69327 system_pods.go:61] "kube-scheduler-addons-768607" [bae6ad44-c046-4778-8875-518fd35d3427] Running
	I1123 09:24:03.904592   69327 system_pods.go:61] "metrics-server-85b7d694d7-gzdxp" [780791bf-6d1f-4a14-a71c-0f02d8863b50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:24:03.904600   69327 system_pods.go:61] "nvidia-device-plugin-daemonset-b9prj" [fa027fa5-6aa4-4e97-a108-f2ce777352d5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 09:24:03.904608   69327 system_pods.go:61] "registry-6b586f9694-wb6sr" [de7eaafd-154b-4e12-962d-23d47c7127a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 09:24:03.904616   69327 system_pods.go:61] "registry-creds-764b6fb674-pf8cs" [b2b57794-0e2a-4a54-b1c1-086e0cf60915] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 09:24:03.904624   69327 system_pods.go:61] "registry-proxy-hvxjj" [abbb6984-3768-48ff-8d09-b43d2af51c4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 09:24:03.904635   69327 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4qkfp" [a3376851-e42c-4ebf-ba15-b05621e85f4b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 09:24:03.904645   69327 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rdc2h" [59dbb46c-b4b8-4ef7-9aba-5e1ad6b160c9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 09:24:03.904651   69327 system_pods.go:61] "storage-provisioner" [ad9a4fd2-465c-41b7-9d68-ca6063fe0d88] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:24:03.904710   69327 system_pods.go:74] duration metric: took 14.679775ms to wait for pod list to return data ...
	I1123 09:24:03.904721   69327 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:24:03.908978   69327 default_sa.go:45] found service account: "default"
	I1123 09:24:03.909020   69327 default_sa.go:55] duration metric: took 4.285307ms for default service account to be created ...
	I1123 09:24:03.909033   69327 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:24:03.998333   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:03.999786   69327 system_pods.go:86] 20 kube-system pods found
	I1123 09:24:03.999821   69327 system_pods.go:89] "amd-gpu-device-plugin-8vlwk" [579f7026-b306-42b4-868b-da51bdb3aa62] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 09:24:03.999828   69327 system_pods.go:89] "coredns-66bc5c9577-qvd9b" [4338f282-61e8-45dc-8a2a-449a8aa65f64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:24:03.999836   69327 system_pods.go:89] "csi-hostpath-attacher-0" [7d3fe3cd-254a-497b-a24b-8d019fbf5bd6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 09:24:03.999841   69327 system_pods.go:89] "csi-hostpath-resizer-0" [544763e0-73d5-4b61-9007-c6fe9d84f20d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 09:24:03.999848   69327 system_pods.go:89] "csi-hostpathplugin-9ksmc" [394b2c7c-3431-4988-afd6-9c9f91d892b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 09:24:03.999854   69327 system_pods.go:89] "etcd-addons-768607" [3cd401e2-00f1-4f8d-a77c-136d3a2b5209] Running
	I1123 09:24:03.999858   69327 system_pods.go:89] "kindnet-tw8jx" [53e669c8-96ed-4de8-a528-3186e3a55797] Running
	I1123 09:24:03.999862   69327 system_pods.go:89] "kube-apiserver-addons-768607" [170cee0a-a920-415c-b5fc-c342107cf219] Running
	I1123 09:24:03.999865   69327 system_pods.go:89] "kube-controller-manager-addons-768607" [a03de210-0ece-464f-b0c5-ddee1361575e] Running
	I1123 09:24:03.999870   69327 system_pods.go:89] "kube-ingress-dns-minikube" [0a409b6d-8a09-46e8-bcc7-d9820885bc20] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 09:24:03.999874   69327 system_pods.go:89] "kube-proxy-szpms" [1858e471-3133-439c-8335-48c0a459824d] Running
	I1123 09:24:03.999877   69327 system_pods.go:89] "kube-scheduler-addons-768607" [bae6ad44-c046-4778-8875-518fd35d3427] Running
	I1123 09:24:03.999882   69327 system_pods.go:89] "metrics-server-85b7d694d7-gzdxp" [780791bf-6d1f-4a14-a71c-0f02d8863b50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:24:03.999890   69327 system_pods.go:89] "nvidia-device-plugin-daemonset-b9prj" [fa027fa5-6aa4-4e97-a108-f2ce777352d5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 09:24:03.999895   69327 system_pods.go:89] "registry-6b586f9694-wb6sr" [de7eaafd-154b-4e12-962d-23d47c7127a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 09:24:03.999902   69327 system_pods.go:89] "registry-creds-764b6fb674-pf8cs" [b2b57794-0e2a-4a54-b1c1-086e0cf60915] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 09:24:03.999907   69327 system_pods.go:89] "registry-proxy-hvxjj" [abbb6984-3768-48ff-8d09-b43d2af51c4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 09:24:03.999916   69327 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4qkfp" [a3376851-e42c-4ebf-ba15-b05621e85f4b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 09:24:03.999922   69327 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rdc2h" [59dbb46c-b4b8-4ef7-9aba-5e1ad6b160c9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 09:24:03.999927   69327 system_pods.go:89] "storage-provisioner" [ad9a4fd2-465c-41b7-9d68-ca6063fe0d88] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:24:03.999945   69327 retry.go:31] will retry after 225.422061ms: missing components: kube-dns
	I1123 09:24:04.119442   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:04.129332   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:04.231552   69327 system_pods.go:86] 20 kube-system pods found
	I1123 09:24:04.231595   69327 system_pods.go:89] "amd-gpu-device-plugin-8vlwk" [579f7026-b306-42b4-868b-da51bdb3aa62] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 09:24:04.231606   69327 system_pods.go:89] "coredns-66bc5c9577-qvd9b" [4338f282-61e8-45dc-8a2a-449a8aa65f64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:24:04.231617   69327 system_pods.go:89] "csi-hostpath-attacher-0" [7d3fe3cd-254a-497b-a24b-8d019fbf5bd6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 09:24:04.231626   69327 system_pods.go:89] "csi-hostpath-resizer-0" [544763e0-73d5-4b61-9007-c6fe9d84f20d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 09:24:04.231640   69327 system_pods.go:89] "csi-hostpathplugin-9ksmc" [394b2c7c-3431-4988-afd6-9c9f91d892b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 09:24:04.231646   69327 system_pods.go:89] "etcd-addons-768607" [3cd401e2-00f1-4f8d-a77c-136d3a2b5209] Running
	I1123 09:24:04.231654   69327 system_pods.go:89] "kindnet-tw8jx" [53e669c8-96ed-4de8-a528-3186e3a55797] Running
	I1123 09:24:04.231660   69327 system_pods.go:89] "kube-apiserver-addons-768607" [170cee0a-a920-415c-b5fc-c342107cf219] Running
	I1123 09:24:04.231668   69327 system_pods.go:89] "kube-controller-manager-addons-768607" [a03de210-0ece-464f-b0c5-ddee1361575e] Running
	I1123 09:24:04.231676   69327 system_pods.go:89] "kube-ingress-dns-minikube" [0a409b6d-8a09-46e8-bcc7-d9820885bc20] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 09:24:04.231682   69327 system_pods.go:89] "kube-proxy-szpms" [1858e471-3133-439c-8335-48c0a459824d] Running
	I1123 09:24:04.231689   69327 system_pods.go:89] "kube-scheduler-addons-768607" [bae6ad44-c046-4778-8875-518fd35d3427] Running
	I1123 09:24:04.231698   69327 system_pods.go:89] "metrics-server-85b7d694d7-gzdxp" [780791bf-6d1f-4a14-a71c-0f02d8863b50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:24:04.231706   69327 system_pods.go:89] "nvidia-device-plugin-daemonset-b9prj" [fa027fa5-6aa4-4e97-a108-f2ce777352d5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 09:24:04.231717   69327 system_pods.go:89] "registry-6b586f9694-wb6sr" [de7eaafd-154b-4e12-962d-23d47c7127a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 09:24:04.231725   69327 system_pods.go:89] "registry-creds-764b6fb674-pf8cs" [b2b57794-0e2a-4a54-b1c1-086e0cf60915] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 09:24:04.231734   69327 system_pods.go:89] "registry-proxy-hvxjj" [abbb6984-3768-48ff-8d09-b43d2af51c4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 09:24:04.231742   69327 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4qkfp" [a3376851-e42c-4ebf-ba15-b05621e85f4b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 09:24:04.231753   69327 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rdc2h" [59dbb46c-b4b8-4ef7-9aba-5e1ad6b160c9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 09:24:04.231772   69327 system_pods.go:89] "storage-provisioner" [ad9a4fd2-465c-41b7-9d68-ca6063fe0d88] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:24:04.231796   69327 retry.go:31] will retry after 386.727357ms: missing components: kube-dns
	I1123 09:24:04.313740   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:04.454378   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:04.618533   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:04.622852   69327 system_pods.go:86] 20 kube-system pods found
	I1123 09:24:04.622891   69327 system_pods.go:89] "amd-gpu-device-plugin-8vlwk" [579f7026-b306-42b4-868b-da51bdb3aa62] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 09:24:04.622901   69327 system_pods.go:89] "coredns-66bc5c9577-qvd9b" [4338f282-61e8-45dc-8a2a-449a8aa65f64] Running
	I1123 09:24:04.622912   69327 system_pods.go:89] "csi-hostpath-attacher-0" [7d3fe3cd-254a-497b-a24b-8d019fbf5bd6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 09:24:04.622920   69327 system_pods.go:89] "csi-hostpath-resizer-0" [544763e0-73d5-4b61-9007-c6fe9d84f20d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 09:24:04.622929   69327 system_pods.go:89] "csi-hostpathplugin-9ksmc" [394b2c7c-3431-4988-afd6-9c9f91d892b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 09:24:04.622935   69327 system_pods.go:89] "etcd-addons-768607" [3cd401e2-00f1-4f8d-a77c-136d3a2b5209] Running
	I1123 09:24:04.622944   69327 system_pods.go:89] "kindnet-tw8jx" [53e669c8-96ed-4de8-a528-3186e3a55797] Running
	I1123 09:24:04.622950   69327 system_pods.go:89] "kube-apiserver-addons-768607" [170cee0a-a920-415c-b5fc-c342107cf219] Running
	I1123 09:24:04.622955   69327 system_pods.go:89] "kube-controller-manager-addons-768607" [a03de210-0ece-464f-b0c5-ddee1361575e] Running
	I1123 09:24:04.622968   69327 system_pods.go:89] "kube-ingress-dns-minikube" [0a409b6d-8a09-46e8-bcc7-d9820885bc20] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 09:24:04.622976   69327 system_pods.go:89] "kube-proxy-szpms" [1858e471-3133-439c-8335-48c0a459824d] Running
	I1123 09:24:04.622982   69327 system_pods.go:89] "kube-scheduler-addons-768607" [bae6ad44-c046-4778-8875-518fd35d3427] Running
	I1123 09:24:04.622995   69327 system_pods.go:89] "metrics-server-85b7d694d7-gzdxp" [780791bf-6d1f-4a14-a71c-0f02d8863b50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:24:04.623004   69327 system_pods.go:89] "nvidia-device-plugin-daemonset-b9prj" [fa027fa5-6aa4-4e97-a108-f2ce777352d5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 09:24:04.623019   69327 system_pods.go:89] "registry-6b586f9694-wb6sr" [de7eaafd-154b-4e12-962d-23d47c7127a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 09:24:04.623026   69327 system_pods.go:89] "registry-creds-764b6fb674-pf8cs" [b2b57794-0e2a-4a54-b1c1-086e0cf60915] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 09:24:04.623041   69327 system_pods.go:89] "registry-proxy-hvxjj" [abbb6984-3768-48ff-8d09-b43d2af51c4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 09:24:04.623051   69327 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4qkfp" [a3376851-e42c-4ebf-ba15-b05621e85f4b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 09:24:04.623062   69327 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rdc2h" [59dbb46c-b4b8-4ef7-9aba-5e1ad6b160c9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 09:24:04.623067   69327 system_pods.go:89] "storage-provisioner" [ad9a4fd2-465c-41b7-9d68-ca6063fe0d88] Running
	I1123 09:24:04.623080   69327 system_pods.go:126] duration metric: took 714.038787ms to wait for k8s-apps to be running ...
	I1123 09:24:04.623105   69327 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:24:04.623167   69327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:24:04.629752   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:04.717911   69327 system_svc.go:56] duration metric: took 94.792733ms WaitForService to wait for kubelet
	I1123 09:24:04.717955   69327 kubeadm.go:587] duration metric: took 42.914053592s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:24:04.717994   69327 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:24:04.721411   69327 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:24:04.721444   69327 node_conditions.go:123] node cpu capacity is 8
	I1123 09:24:04.721465   69327 node_conditions.go:105] duration metric: took 3.464303ms to run NodePressure ...
	I1123 09:24:04.721481   69327 start.go:242] waiting for startup goroutines ...
	I1123 09:24:04.797466   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:04.954009   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:05.119188   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:05.128758   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:05.296384   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:05.453775   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:05.618820   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:05.629548   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:05.795649   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:05.953437   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:06.118887   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:06.129252   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:06.296572   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:06.453947   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:06.619403   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:06.629675   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:06.795780   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:06.953858   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:07.119054   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:07.129145   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:07.296657   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:07.453780   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:07.619026   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:07.630796   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:07.795920   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:07.953613   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:08.118808   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:08.129078   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:08.296132   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:08.452865   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:08.619180   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:08.628619   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:08.795500   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:08.953223   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:09.119370   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:09.129275   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:09.297029   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:09.454070   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:09.619414   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:09.629180   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:09.796067   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:09.953079   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:10.119536   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:10.128893   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:10.296112   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:10.453767   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:10.618809   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:10.629402   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:10.796584   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:10.953582   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:11.118756   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:11.129274   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:11.297289   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:11.453484   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:11.619041   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:11.719653   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:11.795280   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:11.953112   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:12.118281   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:12.128303   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:12.296198   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:12.453209   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:12.619123   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:12.628566   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:12.795333   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:12.953250   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:13.118314   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:13.129238   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:13.296944   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:13.454310   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:13.618236   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:13.628977   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:13.796598   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:13.953352   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:14.135577   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:14.135591   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:14.296155   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:14.452832   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:14.618812   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:14.720044   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:14.795841   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:14.953446   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:15.118126   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:15.128062   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:15.295985   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:15.454177   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:15.619221   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:15.628377   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:15.796289   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:15.952963   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:16.119444   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:16.129172   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:16.296062   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:16.453157   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:16.619217   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:16.629530   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:16.796049   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:16.953145   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:17.119287   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:17.128566   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:17.295770   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:17.454037   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:17.618860   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:17.628761   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:17.796435   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:17.952557   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:18.118750   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:18.128804   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:18.295731   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:18.453725   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:18.620295   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:18.629972   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:18.797276   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:18.955863   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:19.121294   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:19.129787   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:19.311368   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:19.454126   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:19.693481   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:19.693699   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:19.796110   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:19.954239   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:20.119603   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:20.129138   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:20.296678   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:20.453732   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:20.618915   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:20.629405   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:20.796479   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:20.953272   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:21.119548   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:21.129059   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:21.296511   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:21.453342   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:21.618520   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:21.629514   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:21.795946   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:21.952761   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:22.123723   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:22.129070   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:22.296795   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:22.454127   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:22.619495   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:22.628750   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:22.795591   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:22.953149   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:23.119705   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:23.129439   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:23.296928   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:23.454059   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:23.618973   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:23.630366   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:23.797528   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:23.953119   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:24.119163   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:24.128221   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:24.297079   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:24.453837   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:24.620309   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:24.629256   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:24.796850   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:24.953912   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:25.118714   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:25.128825   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:25.296393   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:25.453735   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:25.619316   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:25.628756   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:25.795711   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:25.953272   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:26.118806   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:26.128832   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:26.295983   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:26.453561   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:26.618613   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:26.628680   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:26.795570   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:26.953354   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:27.118220   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:27.131727   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:27.295676   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:27.453469   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:27.618421   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:27.628787   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:27.795668   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:28.011480   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:28.118242   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:28.129121   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:28.296519   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:28.453278   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:28.619999   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:28.628862   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:28.797151   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:28.952981   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:29.118815   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:29.129802   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:29.296568   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:29.453571   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:29.618502   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:29.629403   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:29.796464   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:29.952881   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:30.119061   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:30.128572   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:30.296201   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:30.474055   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:30.619175   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:30.628420   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:30.796744   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:30.953536   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:31.118216   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:31.127986   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:31.295916   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:31.453471   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:31.617962   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:31.628145   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:31.796361   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:31.953451   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:32.118757   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:32.129314   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:32.296888   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:32.453349   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:32.618723   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:32.628810   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:32.795971   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:32.953676   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:33.118318   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:33.128519   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:33.295612   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:33.452686   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:33.619001   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:33.629200   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:33.796044   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:33.952179   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:34.118945   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:34.127825   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:34.296222   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:34.454299   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:34.618186   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:34.629597   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:34.795694   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:34.953182   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:35.119191   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:35.128684   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:35.295579   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:35.453315   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:35.619350   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:35.628936   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:35.796677   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:35.953338   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:36.117910   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:36.129329   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:36.337737   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:36.475706   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:36.618663   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:36.628813   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:36.796490   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:36.953082   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:37.119422   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:24:37.129323   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:37.295927   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:37.454706   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:37.618699   69327 kapi.go:107] duration metric: took 1m14.503570302s to wait for kubernetes.io/minikube-addons=registry ...
	I1123 09:24:37.628709   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:37.795851   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:37.953676   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:38.129222   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:38.297055   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:38.454281   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:38.629646   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:38.796550   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:38.953236   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:39.128655   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:39.295468   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:39.453186   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:39.629125   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:39.796560   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:39.953510   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:40.129175   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:40.296727   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:40.453695   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:40.631711   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:40.796960   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:40.953940   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:41.129653   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:41.296242   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:41.454346   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:41.629415   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:41.796208   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:41.953001   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:42.129353   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:42.297225   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:42.452987   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:42.683083   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:42.796491   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:42.955412   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:43.129485   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:43.296405   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:43.453548   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:43.629530   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:43.795744   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:43.952923   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:44.128854   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:44.295919   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:44.453754   69327 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:44.629758   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:44.795702   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:44.952926   69327 kapi.go:107] duration metric: took 1m21.502975936s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1123 09:24:45.129391   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:45.296341   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:45.629718   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:45.796483   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:46.129023   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:46.296857   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:46.628566   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:46.795532   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:47.128798   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:47.295681   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:47.629496   69327 kapi.go:107] duration metric: took 1m17.503683612s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1123 09:24:47.631287   69327 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-768607 cluster.
	I1123 09:24:47.632778   69327 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1123 09:24:47.634151   69327 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1123 09:24:47.795784   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:48.296438   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:48.796199   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:49.296707   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:49.796484   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:50.296563   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:50.796232   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:51.295999   69327 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:24:51.795811   69327 kapi.go:107] duration metric: took 1m28.003326818s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1123 09:24:51.797487   69327 out.go:179] * Enabled addons: storage-provisioner, cloud-spanner, registry-creds, nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner-rancher, inspektor-gadget, ingress-dns, default-storageclass, yakd, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1123 09:24:51.798563   69327 addons.go:530] duration metric: took 1m29.994638779s for enable addons: enabled=[storage-provisioner cloud-spanner registry-creds nvidia-device-plugin amd-gpu-device-plugin storage-provisioner-rancher inspektor-gadget ingress-dns default-storageclass yakd metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1123 09:24:51.798607   69327 start.go:247] waiting for cluster config update ...
	I1123 09:24:51.798634   69327 start.go:256] writing updated cluster config ...
	I1123 09:24:51.798933   69327 ssh_runner.go:195] Run: rm -f paused
	I1123 09:24:51.802842   69327 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:24:51.805813   69327 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qvd9b" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:51.809680   69327 pod_ready.go:94] pod "coredns-66bc5c9577-qvd9b" is "Ready"
	I1123 09:24:51.809702   69327 pod_ready.go:86] duration metric: took 3.869185ms for pod "coredns-66bc5c9577-qvd9b" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:51.811367   69327 pod_ready.go:83] waiting for pod "etcd-addons-768607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:51.814970   69327 pod_ready.go:94] pod "etcd-addons-768607" is "Ready"
	I1123 09:24:51.814989   69327 pod_ready.go:86] duration metric: took 3.602809ms for pod "etcd-addons-768607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:51.816856   69327 pod_ready.go:83] waiting for pod "kube-apiserver-addons-768607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:51.820245   69327 pod_ready.go:94] pod "kube-apiserver-addons-768607" is "Ready"
	I1123 09:24:51.820275   69327 pod_ready.go:86] duration metric: took 3.397031ms for pod "kube-apiserver-addons-768607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:51.821898   69327 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-768607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:52.206905   69327 pod_ready.go:94] pod "kube-controller-manager-addons-768607" is "Ready"
	I1123 09:24:52.206940   69327 pod_ready.go:86] duration metric: took 385.022394ms for pod "kube-controller-manager-addons-768607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:52.459008   69327 pod_ready.go:83] waiting for pod "kube-proxy-szpms" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:52.820223   69327 pod_ready.go:94] pod "kube-proxy-szpms" is "Ready"
	I1123 09:24:52.820255   69327 pod_ready.go:86] duration metric: took 361.216669ms for pod "kube-proxy-szpms" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:53.007511   69327 pod_ready.go:83] waiting for pod "kube-scheduler-addons-768607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:53.406358   69327 pod_ready.go:94] pod "kube-scheduler-addons-768607" is "Ready"
	I1123 09:24:53.406393   69327 pod_ready.go:86] duration metric: took 398.854316ms for pod "kube-scheduler-addons-768607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:53.406411   69327 pod_ready.go:40] duration metric: took 1.603537207s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:24:53.449150   69327 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:24:53.451401   69327 out.go:179] * Done! kubectl is now configured to use "addons-768607" cluster and "default" namespace by default
	
	
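The gcp-auth addon output above states that GCP credentials are mounted into every pod in the addons-768607 cluster and that a pod can opt out by carrying the `gcp-auth-skip-secret` label key. A minimal sketch of such a pod manifest follows; the label value "true" and the pod name are assumptions for illustration (only the label key and the busybox image appear in the log above).

	# hypothetical pod manifest: opts out of the gcp-auth credential mount
	# via the gcp-auth-skip-secret label key mentioned in the addon output;
	# the "true" value is an assumption, only the key is named in the log
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds        # assumed example name
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: busybox
	    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    command: ["sleep", "3600"]

Pods created without this label would receive the credential mount described in the gcp-auth messages above; per the same output, pods that already exist must be recreated (or the addon re-enabled with --refresh) to pick up the mount.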
	==> CRI-O <==
	Nov 23 09:24:54 addons-768607 crio[772]: time="2025-11-23T09:24:54.282222103Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=871d150a-082e-46f6-bec0-92f6382e2650 name=/runtime.v1.ImageService/PullImage
	Nov 23 09:24:54 addons-768607 crio[772]: time="2025-11-23T09:24:54.283721607Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 09:24:56 addons-768607 crio[772]: time="2025-11-23T09:24:56.265266674Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=871d150a-082e-46f6-bec0-92f6382e2650 name=/runtime.v1.ImageService/PullImage
	Nov 23 09:24:56 addons-768607 crio[772]: time="2025-11-23T09:24:56.265815422Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d136401b-3c18-4a09-9b65-272e10562e91 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:24:56 addons-768607 crio[772]: time="2025-11-23T09:24:56.267162137Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=640cbd26-3825-4ade-84a6-1187e709ce47 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:24:56 addons-768607 crio[772]: time="2025-11-23T09:24:56.27080667Z" level=info msg="Creating container: default/busybox/busybox" id=a71bd133-f0e9-450a-9e8f-a0cadd92e8e0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:24:56 addons-768607 crio[772]: time="2025-11-23T09:24:56.270933051Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:24:56 addons-768607 crio[772]: time="2025-11-23T09:24:56.27630281Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:24:56 addons-768607 crio[772]: time="2025-11-23T09:24:56.276752196Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:24:56 addons-768607 crio[772]: time="2025-11-23T09:24:56.310716489Z" level=info msg="Created container 55be9b144d3e0d6440923bb4289b75258babaf95bdd3a9401a71e0838bb24cf6: default/busybox/busybox" id=a71bd133-f0e9-450a-9e8f-a0cadd92e8e0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:24:56 addons-768607 crio[772]: time="2025-11-23T09:24:56.311303566Z" level=info msg="Starting container: 55be9b144d3e0d6440923bb4289b75258babaf95bdd3a9401a71e0838bb24cf6" id=cd08785d-f9de-4ff0-b93a-fe9fc95df71c name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:24:56 addons-768607 crio[772]: time="2025-11-23T09:24:56.313080266Z" level=info msg="Started container" PID=6269 containerID=55be9b144d3e0d6440923bb4289b75258babaf95bdd3a9401a71e0838bb24cf6 description=default/busybox/busybox id=cd08785d-f9de-4ff0-b93a-fe9fc95df71c name=/runtime.v1.RuntimeService/StartContainer sandboxID=c8a3b2e53da2b03390a83fa7b2b5c9bab73dee86a36f68d05acbeed32e0f699c
	Nov 23 09:25:03 addons-768607 crio[772]: time="2025-11-23T09:25:03.118170144Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-create-pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d/POD" id=1aa15de4-8f85-48ad-b3f2-9716d51e191a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:25:03 addons-768607 crio[772]: time="2025-11-23T09:25:03.118271277Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:25:03 addons-768607 crio[772]: time="2025-11-23T09:25:03.126564584Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d Namespace:local-path-storage ID:ee7d5637224f915118bd0cdaaffc328609f2fc60ce5deeecdb50a7ff00160d70 UID:3fb5375e-0676-45dc-a826-bb0cb74ab32d NetNS:/var/run/netns/b78a4a22-3f7e-4128-be27-8944fee0371b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00058c778}] Aliases:map[]}"
	Nov 23 09:25:03 addons-768607 crio[772]: time="2025-11-23T09:25:03.12660727Z" level=info msg="Adding pod local-path-storage_helper-pod-create-pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d to CNI network \"kindnet\" (type=ptp)"
	Nov 23 09:25:03 addons-768607 crio[772]: time="2025-11-23T09:25:03.138790023Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d Namespace:local-path-storage ID:ee7d5637224f915118bd0cdaaffc328609f2fc60ce5deeecdb50a7ff00160d70 UID:3fb5375e-0676-45dc-a826-bb0cb74ab32d NetNS:/var/run/netns/b78a4a22-3f7e-4128-be27-8944fee0371b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00058c778}] Aliases:map[]}"
	Nov 23 09:25:03 addons-768607 crio[772]: time="2025-11-23T09:25:03.13897692Z" level=info msg="Checking pod local-path-storage_helper-pod-create-pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d for CNI network kindnet (type=ptp)"
	Nov 23 09:25:03 addons-768607 crio[772]: time="2025-11-23T09:25:03.140273514Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 09:25:03 addons-768607 crio[772]: time="2025-11-23T09:25:03.141393873Z" level=info msg="Ran pod sandbox ee7d5637224f915118bd0cdaaffc328609f2fc60ce5deeecdb50a7ff00160d70 with infra container: local-path-storage/helper-pod-create-pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d/POD" id=1aa15de4-8f85-48ad-b3f2-9716d51e191a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:25:03 addons-768607 crio[772]: time="2025-11-23T09:25:03.14296924Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=7bfef7c0-5b85-404e-90bd-fc73dd56fb82 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:25:03 addons-768607 crio[772]: time="2025-11-23T09:25:03.14321633Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=7bfef7c0-5b85-404e-90bd-fc73dd56fb82 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:25:03 addons-768607 crio[772]: time="2025-11-23T09:25:03.143279297Z" level=info msg="Neither image nor artfiact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=7bfef7c0-5b85-404e-90bd-fc73dd56fb82 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:25:03 addons-768607 crio[772]: time="2025-11-23T09:25:03.14405578Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=d3d03bda-7316-4e4d-84f8-c4788cce7f38 name=/runtime.v1.ImageService/PullImage
	Nov 23 09:25:03 addons-768607 crio[772]: time="2025-11-23T09:25:03.145709819Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	55be9b144d3e0       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago        Running             busybox                                  0                   c8a3b2e53da2b       busybox                                    default
	25a90399c1823       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          13 seconds ago       Running             csi-snapshotter                          0                   5b6c39f3fc294       csi-hostpathplugin-9ksmc                   kube-system
	30475367013dc       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          14 seconds ago       Running             csi-provisioner                          0                   5b6c39f3fc294       csi-hostpathplugin-9ksmc                   kube-system
	c692f2c4458f0       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            15 seconds ago       Running             liveness-probe                           0                   5b6c39f3fc294       csi-hostpathplugin-9ksmc                   kube-system
	231168bdacbd0       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           16 seconds ago       Running             hostpath                                 0                   5b6c39f3fc294       csi-hostpathplugin-9ksmc                   kube-system
	1e22dfb32cfee       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 16 seconds ago       Running             gcp-auth                                 0                   c38c00beffd66       gcp-auth-78565c9fb4-2pvgc                  gcp-auth
	c919189c246c8       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             20 seconds ago       Running             controller                               0                   f2cf3bb76ce7e       ingress-nginx-controller-6c8bf45fb-bpzqp   ingress-nginx
	5fb1a2f531ff0       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            23 seconds ago       Running             gadget                                   0                   f11a9f639780d       gadget-hp58l                               gadget
	1364a68c663de       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                26 seconds ago       Running             node-driver-registrar                    0                   5b6c39f3fc294       csi-hostpathplugin-9ksmc                   kube-system
	57e021fa16b34       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              27 seconds ago       Running             registry-proxy                           0                   ca1eb83b1b2dd       registry-proxy-hvxjj                       kube-system
	3be653d3906b7       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      31 seconds ago       Running             volume-snapshot-controller               0                   432d04f46b6c6       snapshot-controller-7d9fbc56b8-rdc2h       kube-system
	910a3c428a715       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   31 seconds ago       Exited              patch                                    0                   304560316f93b       gcp-auth-certs-patch-zqhhc                 gcp-auth
	ae93f08af7cde       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     31 seconds ago       Running             amd-gpu-device-plugin                    0                   a527c1165fb87       amd-gpu-device-plugin-8vlwk                kube-system
	58c6caa5d7a2a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      33 seconds ago       Running             volume-snapshot-controller               0                   b141c1fe9e009       snapshot-controller-7d9fbc56b8-4qkfp       kube-system
	021ee69331dd2       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     34 seconds ago       Running             nvidia-device-plugin-ctr                 0                   70c6e19296627       nvidia-device-plugin-daemonset-b9prj       kube-system
	59c5e7c66e383       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   38 seconds ago       Running             csi-external-health-monitor-controller   0                   5b6c39f3fc294       csi-hostpathplugin-9ksmc                   kube-system
	f4fec87683212       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              39 seconds ago       Running             csi-resizer                              0                   330ecedceef63       csi-hostpath-resizer-0                     kube-system
	6039167b575c1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   39 seconds ago       Exited              patch                                    0                   a92d85b711d57       ingress-nginx-admission-patch-6r4gd        ingress-nginx
	530d43c188680       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   40 seconds ago       Exited              create                                   0                   f39efe8cafe25       ingress-nginx-admission-create-gxxrb       ingress-nginx
	0e7995d7319d3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   40 seconds ago       Exited              create                                   0                   6930b0e53a1aa       gcp-auth-certs-create-45d4c                gcp-auth
	00cf685e4f763       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               41 seconds ago       Running             minikube-ingress-dns                     0                   d57b49805add0       kube-ingress-dns-minikube                  kube-system
	6e05171fad5d4       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             46 seconds ago       Running             csi-attacher                             0                   0be8d43ad1430       csi-hostpath-attacher-0                    kube-system
	256c13e134ad7       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             47 seconds ago       Running             local-path-provisioner                   0                   fe1fb36c19632       local-path-provisioner-648f6765c9-txfsp    local-path-storage
	d693f68cf264b       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              49 seconds ago       Running             yakd                                     0                   6be84db5a7ce8       yakd-dashboard-5ff678cb9-288kh             yakd-dashboard
	034d682ec778c       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               52 seconds ago       Running             cloud-spanner-emulator                   0                   92541d65f90ee       cloud-spanner-emulator-5bdddb765-qn9ss     default
	da035bc9e46eb       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           56 seconds ago       Running             registry                                 0                   0e616adcb24df       registry-6b586f9694-wb6sr                  kube-system
	c21acab334cad       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        59 seconds ago       Running             metrics-server                           0                   da0edeb18a21b       metrics-server-85b7d694d7-gzdxp            kube-system
	8f3fdc51b52f6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   cc03772847c09       coredns-66bc5c9577-qvd9b                   kube-system
	01d6b9bf1de88       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   3a06a365102cd       storage-provisioner                        kube-system
	403102191b13c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   d32bf043aaa4c       kube-proxy-szpms                           kube-system
	d98e916f22715       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   79d8d5732cfb1       kindnet-tw8jx                              kube-system
	628b56a1e0e47       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago   Running             etcd                                     0                   330aaa6f639e9       etcd-addons-768607                         kube-system
	93dfa5558a7a8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago   Running             kube-controller-manager                  0                   a873d54ad8caa       kube-controller-manager-addons-768607      kube-system
	b5f64ab3094a6       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago   Running             kube-apiserver                           0                   268a5e52d91de       kube-apiserver-addons-768607               kube-system
	d8909c0c21553       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago   Running             kube-scheduler                           0                   3c141f6eb777e       kube-scheduler-addons-768607               kube-system
	
	
	==> coredns [8f3fdc51b52f6779513f36acefb86bcc8943baf18483f08bf8cce60927bd9cd4] <==
	[INFO] 10.244.0.18:39518 - 43598 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000167528s
	[INFO] 10.244.0.18:54999 - 48705 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126114s
	[INFO] 10.244.0.18:54999 - 48970 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000176444s
	[INFO] 10.244.0.18:57693 - 32234 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000079057s
	[INFO] 10.244.0.18:57693 - 32520 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000134874s
	[INFO] 10.244.0.18:35593 - 46305 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000069941s
	[INFO] 10.244.0.18:35593 - 46073 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.00010499s
	[INFO] 10.244.0.18:44300 - 18193 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00005987s
	[INFO] 10.244.0.18:44300 - 18008 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000098074s
	[INFO] 10.244.0.18:37449 - 41246 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00011318s
	[INFO] 10.244.0.18:37449 - 41111 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000151928s
	[INFO] 10.244.0.22:34772 - 31747 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000202721s
	[INFO] 10.244.0.22:54494 - 31515 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000142499s
	[INFO] 10.244.0.22:35892 - 64793 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000142224s
	[INFO] 10.244.0.22:58841 - 64100 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000186469s
	[INFO] 10.244.0.22:46063 - 28549 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000134688s
	[INFO] 10.244.0.22:50023 - 55261 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000142648s
	[INFO] 10.244.0.22:58156 - 46325 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006493188s
	[INFO] 10.244.0.22:57985 - 17948 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006975937s
	[INFO] 10.244.0.22:40092 - 21269 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004203429s
	[INFO] 10.244.0.22:53637 - 37524 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006615399s
	[INFO] 10.244.0.22:43184 - 55676 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00326154s
	[INFO] 10.244.0.22:58206 - 40599 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005204673s
	[INFO] 10.244.0.22:38273 - 29143 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000792185s
	[INFO] 10.244.0.22:54052 - 61089 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001085307s
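The CoreDNS lines above show the usual in-cluster search-path expansion: each lookup is retried with every resolver suffix (svc.cluster.local, cluster.local, then the GCE zone/project/internal domains) and returns NXDOMAIN until the bare name resolves with NOERROR. A minimal, hypothetical Go sketch for summarizing such lines offline; the regular expression mirrors the query-log format shown here and is illustrative only, not part of the test suite:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"regexp"
	)

	// Matches CoreDNS query-log lines of the form seen above, e.g.
	// [INFO] 10.244.0.18:39518 - 43598 "AAAA IN name. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000167528s
	var queryLine = regexp.MustCompile(
		`\[INFO\] (\S+) - \d+ "(\S+) IN (\S+) \S+ \d+ \S+ \d+" (\S+) \S+ \d+ ([0-9.]+)s`)

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		byRcode := map[string]int{} // e.g. NXDOMAIN vs NOERROR counts
		for sc.Scan() {
			m := queryLine.FindStringSubmatch(sc.Text())
			if m == nil {
				continue // not a query-log line
			}
			client, qtype, name, rcode, dur := m[1], m[2], m[3], m[4], m[5]
			byRcode[rcode]++
			fmt.Printf("%-22s %-5s %-9s %ss  %s\n", client, qtype, rcode, dur, name)
		}
		fmt.Println("rcode counts:", byRcode)
	}

Piping the `==> coredns <==` block through this prints one row per query and a final rcode tally, which makes the NXDOMAIN fan-out easy to spot.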
	
	
	==> describe nodes <==
	Name:               addons-768607
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-768607
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=addons-768607
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_23_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-768607
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-768607"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:23:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-768607
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:24:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:24:48 +0000   Sun, 23 Nov 2025 09:23:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:24:48 +0000   Sun, 23 Nov 2025 09:23:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:24:48 +0000   Sun, 23 Nov 2025 09:23:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:24:48 +0000   Sun, 23 Nov 2025 09:24:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-768607
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                a0f70bd2-ce4d-4b3b-948d-0689086be8f1
	  Boot ID:                    37682299-5e60-467e-85b2-43c912a4056e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-5bdddb765-qn9ss                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  gadget                      gadget-hp58l                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  gcp-auth                    gcp-auth-78565c9fb4-2pvgc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-bpzqp                      100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         101s
	  kube-system                 amd-gpu-device-plugin-8vlwk                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 coredns-66bc5c9577-qvd9b                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     102s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 csi-hostpathplugin-9ksmc                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 etcd-addons-768607                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         108s
	  kube-system                 kindnet-tw8jx                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-addons-768607                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-addons-768607                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-proxy-szpms                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-addons-768607                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 metrics-server-85b7d694d7-gzdxp                               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         101s
	  kube-system                 nvidia-device-plugin-daemonset-b9prj                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 registry-6b586f9694-wb6sr                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 registry-creds-764b6fb674-pf8cs                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 registry-proxy-hvxjj                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 snapshot-controller-7d9fbc56b8-4qkfp                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 snapshot-controller-7d9fbc56b8-rdc2h                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  local-path-storage          helper-pod-create-pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  local-path-storage          local-path-provisioner-648f6765c9-txfsp                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-288kh                                0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     101s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 100s                 kube-proxy       
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  113s (x8 over 113s)  kubelet          Node addons-768607 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x8 over 113s)  kubelet          Node addons-768607 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x8 over 113s)  kubelet          Node addons-768607 status is now: NodeHasSufficientPID
	  Normal  Starting                 108s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  108s                 kubelet          Node addons-768607 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s                 kubelet          Node addons-768607 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s                 kubelet          Node addons-768607 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           104s                 node-controller  Node addons-768607 event: Registered Node addons-768607 in Controller
	  Normal  NodeReady                61s                  kubelet          Node addons-768607 status is now: NodeReady
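Per the condition table and events above, the node transitioned to Ready at 09:24:03, 61s before this snapshot, which matches the 61s ages of the DaemonSet pods (amd-gpu-device-plugin, csi-hostpathplugin, nvidia-device-plugin, registry-proxy) that were scheduled once the node became Ready. The same condition table can be read programmatically; a hedged client-go sketch, assuming a kubeconfig for this profile and the k8s.io/client-go module (both assumptions, not part of the report):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config; the node name matches this report's profile and is illustrative.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-768607", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Prints the same MemoryPressure / DiskPressure / PIDPressure / Ready rows as `describe nodes`.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}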
	
	
	==> dmesg <==
	[Nov23 07:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001866] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.086010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.383088] i8042: Warning: Keylock active
	[  +0.012890] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.461634] block sda: the capability attribute has been deprecated.
	[  +0.078010] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021497] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276866] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [628b56a1e0e47a0532ea5375471e5d17f64b1bece8bd8004b4ed449cf90764a3] <==
	{"level":"warn","ts":"2025-11-23T09:23:13.270913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.276868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.283267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.289059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.295793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.302280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.308778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.316207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.323241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.329097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.335664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.342397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.348518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.354522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.360509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.367893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.387501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.393490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.399107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:13.439837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:24.215747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:24.222954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:50.837480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:50.851573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:23:50.857674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54340","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [1e22dfb32cfee7e5c5ffb22b93c4741fa7b2e19a1b9343ca2f2b65b92d580467] <==
	2025/11/23 09:24:47 GCP Auth Webhook started!
	2025/11/23 09:24:53 Ready to marshal response ...
	2025/11/23 09:24:53 Ready to write response ...
	2025/11/23 09:24:53 Ready to marshal response ...
	2025/11/23 09:24:53 Ready to write response ...
	2025/11/23 09:24:54 Ready to marshal response ...
	2025/11/23 09:24:54 Ready to write response ...
	2025/11/23 09:25:02 Ready to marshal response ...
	2025/11/23 09:25:02 Ready to write response ...
	2025/11/23 09:25:02 Ready to marshal response ...
	2025/11/23 09:25:02 Ready to write response ...
	
	
	==> kernel <==
	 09:25:04 up  2:07,  0 user,  load average: 0.90, 1.43, 1.89
	Linux addons-768607 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d98e916f227153ff84dad39f7895deed814fbbef0272aa14546e6a49f6c7226d] <==
	I1123 09:23:23.158643       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:23:23.158676       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:23:23.158688       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:23:23.159400       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 09:23:53.158722       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 09:23:53.159750       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 09:23:53.159756       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 09:23:53.181154       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1123 09:23:54.758878       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:23:54.758914       1 metrics.go:72] Registering metrics
	I1123 09:23:54.759016       1 controller.go:711] "Syncing nftables rules"
	I1123 09:24:03.164846       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:24:03.164925       1 main.go:301] handling current node
	I1123 09:24:13.158544       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:24:13.158617       1 main.go:301] handling current node
	I1123 09:24:23.158584       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:24:23.158621       1 main.go:301] handling current node
	I1123 09:24:33.158408       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:24:33.158440       1 main.go:301] handling current node
	I1123 09:24:43.158396       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:24:43.158433       1 main.go:301] handling current node
	I1123 09:24:53.158682       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:24:53.158719       1 main.go:301] handling current node
	I1123 09:25:03.158424       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:25:03.158457       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b5f64ab3094a653f0bd8f634e5e2cc5066d0b571ace3c66c888b4190eadc2d99] <==
	E1123 09:24:06.346258       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1123 09:24:06.346694       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.243.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.243.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.243.125:443: connect: connection refused" logger="UnhandledError"
	E1123 09:24:06.352565       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.243.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.243.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.243.125:443: connect: connection refused" logger="UnhandledError"
	E1123 09:24:06.374059       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.243.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.243.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.243.125:443: connect: connection refused" logger="UnhandledError"
	W1123 09:24:07.348219       1 handler_proxy.go:99] no RequestInfo found in the context
	W1123 09:24:07.348258       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 09:24:07.348259       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1123 09:24:07.348296       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1123 09:24:07.348340       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1123 09:24:07.349494       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1123 09:24:11.421193       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 09:24:11.421250       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1123 09:24:11.421327       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.243.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.243.125:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1123 09:24:11.430041       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1123 09:25:02.098590       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60996: use of closed network connection
	E1123 09:25:02.258467       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:32786: use of closed network connection
	
	
	==> kube-controller-manager [93dfa5558a7a808c5c354787ab8eec238559016b46eb3e6825f32eb25403e092] <==
	I1123 09:23:20.814836       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-768607"
	I1123 09:23:20.814887       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:23:20.814885       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 09:23:20.815319       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 09:23:20.815328       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 09:23:20.816155       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:23:20.816608       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:23:20.816699       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 09:23:20.816950       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 09:23:20.816987       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 09:23:20.817036       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 09:23:20.817451       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 09:23:20.817829       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 09:23:20.820359       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:23:20.820401       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:23:20.835946       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1123 09:23:23.170039       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1123 09:23:50.825594       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1123 09:23:50.825725       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1123 09:23:50.825783       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1123 09:23:50.842619       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1123 09:23:50.846227       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1123 09:23:50.926158       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:23:50.947377       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:24:05.818916       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [403102191b13c2eef45478f5af6a1ed72ff7fbdea27c8bebc65ffccf6197a3be] <==
	I1123 09:23:22.943851       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:23:23.083881       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:23:23.184189       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:23:23.184230       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 09:23:23.184336       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:23:23.277700       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:23:23.277776       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:23:23.289023       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:23:23.294685       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:23:23.294714       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:23:23.298124       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:23:23.298130       1 config.go:200] "Starting service config controller"
	I1123 09:23:23.298155       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:23:23.298158       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:23:23.298309       1 config.go:309] "Starting node config controller"
	I1123 09:23:23.298349       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:23:23.298376       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:23:23.298585       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:23:23.298664       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:23:23.399165       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:23:23.399204       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:23:23.400515       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d8909c0c21553cdb1824a36e8e2357948596cd908eaa63008f1925c3a97b4f14] <==
	E1123 09:23:13.839362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:23:13.839369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:23:13.839482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:23:13.839483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:23:13.839778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:23:13.839778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:23:13.839856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:23:13.839860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:23:13.839880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:23:13.839896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:23:13.839958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:23:13.840007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:23:13.840009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:23:13.840146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:23:13.840161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:23:13.840172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:23:13.840202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:23:14.729040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:23:14.729040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 09:23:14.808981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:23:14.816984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:23:14.913853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:23:14.972136       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:23:15.019261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1123 09:23:17.938035       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:24:33 addons-768607 kubelet[1289]: I1123 09:24:33.446808    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-8vlwk" podStartSLOduration=2.059739606 podStartE2EDuration="30.446787639s" podCreationTimestamp="2025-11-23 09:24:03 +0000 UTC" firstStartedPulling="2025-11-23 09:24:03.907835918 +0000 UTC m=+47.815341910" lastFinishedPulling="2025-11-23 09:24:32.294883934 +0000 UTC m=+76.202389943" observedRunningTime="2025-11-23 09:24:32.442232365 +0000 UTC m=+76.349738399" watchObservedRunningTime="2025-11-23 09:24:33.446787639 +0000 UTC m=+77.354293649"
	Nov 23 09:24:33 addons-768607 kubelet[1289]: I1123 09:24:33.455066    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/snapshot-controller-7d9fbc56b8-rdc2h" podStartSLOduration=41.689438473 podStartE2EDuration="1m10.455046232s" podCreationTimestamp="2025-11-23 09:23:23 +0000 UTC" firstStartedPulling="2025-11-23 09:24:03.911402243 +0000 UTC m=+47.818908231" lastFinishedPulling="2025-11-23 09:24:32.677010002 +0000 UTC m=+76.584515990" observedRunningTime="2025-11-23 09:24:33.454453991 +0000 UTC m=+77.361960021" watchObservedRunningTime="2025-11-23 09:24:33.455046232 +0000 UTC m=+77.362552242"
	Nov 23 09:24:34 addons-768607 kubelet[1289]: I1123 09:24:34.490761    1289 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfbts\" (UniqueName: \"kubernetes.io/projected/26a68bc5-888b-4ede-8c02-01562d08e18e-kube-api-access-jfbts\") pod \"26a68bc5-888b-4ede-8c02-01562d08e18e\" (UID: \"26a68bc5-888b-4ede-8c02-01562d08e18e\") "
	Nov 23 09:24:34 addons-768607 kubelet[1289]: I1123 09:24:34.493179    1289 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26a68bc5-888b-4ede-8c02-01562d08e18e-kube-api-access-jfbts" (OuterVolumeSpecName: "kube-api-access-jfbts") pod "26a68bc5-888b-4ede-8c02-01562d08e18e" (UID: "26a68bc5-888b-4ede-8c02-01562d08e18e"). InnerVolumeSpecName "kube-api-access-jfbts". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 23 09:24:34 addons-768607 kubelet[1289]: I1123 09:24:34.591693    1289 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jfbts\" (UniqueName: \"kubernetes.io/projected/26a68bc5-888b-4ede-8c02-01562d08e18e-kube-api-access-jfbts\") on node \"addons-768607\" DevicePath \"\""
	Nov 23 09:24:35 addons-768607 kubelet[1289]: E1123 09:24:35.296692    1289 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 23 09:24:35 addons-768607 kubelet[1289]: E1123 09:24:35.296779    1289 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2b57794-0e2a-4a54-b1c1-086e0cf60915-gcr-creds podName:b2b57794-0e2a-4a54-b1c1-086e0cf60915 nodeName:}" failed. No retries permitted until 2025-11-23 09:25:07.296763044 +0000 UTC m=+111.204269037 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/b2b57794-0e2a-4a54-b1c1-086e0cf60915-gcr-creds") pod "registry-creds-764b6fb674-pf8cs" (UID: "b2b57794-0e2a-4a54-b1c1-086e0cf60915") : secret "registry-creds-gcr" not found
	Nov 23 09:24:35 addons-768607 kubelet[1289]: I1123 09:24:35.446353    1289 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="304560316f93b9258eb71f7e16ab06a282e34b454d28dce96ec0c37754fd21c1"
	Nov 23 09:24:37 addons-768607 kubelet[1289]: I1123 09:24:37.458638    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-hvxjj" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 09:24:37 addons-768607 kubelet[1289]: I1123 09:24:37.467402    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-hvxjj" podStartSLOduration=1.8502661809999998 podStartE2EDuration="34.467385059s" podCreationTimestamp="2025-11-23 09:24:03 +0000 UTC" firstStartedPulling="2025-11-23 09:24:03.993494697 +0000 UTC m=+47.901000685" lastFinishedPulling="2025-11-23 09:24:36.610613564 +0000 UTC m=+80.518119563" observedRunningTime="2025-11-23 09:24:37.466835918 +0000 UTC m=+81.374341930" watchObservedRunningTime="2025-11-23 09:24:37.467385059 +0000 UTC m=+81.374891067"
	Nov 23 09:24:38 addons-768607 kubelet[1289]: I1123 09:24:38.462741    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-hvxjj" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 09:24:42 addons-768607 kubelet[1289]: I1123 09:24:42.187680    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-hp58l" podStartSLOduration=66.081037679 podStartE2EDuration="1m19.187658954s" podCreationTimestamp="2025-11-23 09:23:23 +0000 UTC" firstStartedPulling="2025-11-23 09:24:27.094424947 +0000 UTC m=+71.001930951" lastFinishedPulling="2025-11-23 09:24:40.201046196 +0000 UTC m=+84.108552226" observedRunningTime="2025-11-23 09:24:40.495751275 +0000 UTC m=+84.403257307" watchObservedRunningTime="2025-11-23 09:24:42.187658954 +0000 UTC m=+86.095164964"
	Nov 23 09:24:44 addons-768607 kubelet[1289]: I1123 09:24:44.498826    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-bpzqp" podStartSLOduration=74.186768946 podStartE2EDuration="1m21.498804303s" podCreationTimestamp="2025-11-23 09:23:23 +0000 UTC" firstStartedPulling="2025-11-23 09:24:36.401169487 +0000 UTC m=+80.308675491" lastFinishedPulling="2025-11-23 09:24:43.713204859 +0000 UTC m=+87.620710848" observedRunningTime="2025-11-23 09:24:44.497597952 +0000 UTC m=+88.405103980" watchObservedRunningTime="2025-11-23 09:24:44.498804303 +0000 UTC m=+88.406310313"
	Nov 23 09:24:47 addons-768607 kubelet[1289]: I1123 09:24:47.520395    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-2pvgc" podStartSLOduration=67.155122062 podStartE2EDuration="1m17.520368237s" podCreationTimestamp="2025-11-23 09:23:30 +0000 UTC" firstStartedPulling="2025-11-23 09:24:36.604558903 +0000 UTC m=+80.512064904" lastFinishedPulling="2025-11-23 09:24:46.96980509 +0000 UTC m=+90.877311079" observedRunningTime="2025-11-23 09:24:47.51901806 +0000 UTC m=+91.426524083" watchObservedRunningTime="2025-11-23 09:24:47.520368237 +0000 UTC m=+91.427874248"
	Nov 23 09:24:49 addons-768607 kubelet[1289]: I1123 09:24:49.242343    1289 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 23 09:24:49 addons-768607 kubelet[1289]: I1123 09:24:49.242381    1289 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 23 09:24:51 addons-768607 kubelet[1289]: I1123 09:24:51.541069    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-9ksmc" podStartSLOduration=1.952152194 podStartE2EDuration="48.541048373s" podCreationTimestamp="2025-11-23 09:24:03 +0000 UTC" firstStartedPulling="2025-11-23 09:24:03.890841753 +0000 UTC m=+47.798347756" lastFinishedPulling="2025-11-23 09:24:50.479737942 +0000 UTC m=+94.387243935" observedRunningTime="2025-11-23 09:24:51.540438953 +0000 UTC m=+95.447944993" watchObservedRunningTime="2025-11-23 09:24:51.541048373 +0000 UTC m=+95.448554383"
	Nov 23 09:24:54 addons-768607 kubelet[1289]: I1123 09:24:54.044532    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ncmg\" (UniqueName: \"kubernetes.io/projected/e9dc25fe-97c5-431f-bdc9-31e095db24ec-kube-api-access-4ncmg\") pod \"busybox\" (UID: \"e9dc25fe-97c5-431f-bdc9-31e095db24ec\") " pod="default/busybox"
	Nov 23 09:24:54 addons-768607 kubelet[1289]: I1123 09:24:54.044579    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e9dc25fe-97c5-431f-bdc9-31e095db24ec-gcp-creds\") pod \"busybox\" (UID: \"e9dc25fe-97c5-431f-bdc9-31e095db24ec\") " pod="default/busybox"
	Nov 23 09:24:56 addons-768607 kubelet[1289]: I1123 09:24:56.175438    1289 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aefcdbb5-c817-496a-a65d-533a133f5350" path="/var/lib/kubelet/pods/aefcdbb5-c817-496a-a65d-533a133f5350/volumes"
	Nov 23 09:24:56 addons-768607 kubelet[1289]: I1123 09:24:56.563362    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.5786441629999999 podStartE2EDuration="3.563342872s" podCreationTimestamp="2025-11-23 09:24:53 +0000 UTC" firstStartedPulling="2025-11-23 09:24:54.281907571 +0000 UTC m=+98.189413560" lastFinishedPulling="2025-11-23 09:24:56.26660628 +0000 UTC m=+100.174112269" observedRunningTime="2025-11-23 09:24:56.562971441 +0000 UTC m=+100.470477451" watchObservedRunningTime="2025-11-23 09:24:56.563342872 +0000 UTC m=+100.470848882"
	Nov 23 09:25:02 addons-768607 kubelet[1289]: I1123 09:25:02.908777    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/3fb5375e-0676-45dc-a826-bb0cb74ab32d-data\") pod \"helper-pod-create-pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d\" (UID: \"3fb5375e-0676-45dc-a826-bb0cb74ab32d\") " pod="local-path-storage/helper-pod-create-pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d"
	Nov 23 09:25:02 addons-768607 kubelet[1289]: I1123 09:25:02.908946    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3fb5375e-0676-45dc-a826-bb0cb74ab32d-gcp-creds\") pod \"helper-pod-create-pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d\" (UID: \"3fb5375e-0676-45dc-a826-bb0cb74ab32d\") " pod="local-path-storage/helper-pod-create-pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d"
	Nov 23 09:25:02 addons-768607 kubelet[1289]: I1123 09:25:02.909061    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/3fb5375e-0676-45dc-a826-bb0cb74ab32d-script\") pod \"helper-pod-create-pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d\" (UID: \"3fb5375e-0676-45dc-a826-bb0cb74ab32d\") " pod="local-path-storage/helper-pod-create-pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d"
	Nov 23 09:25:02 addons-768607 kubelet[1289]: I1123 09:25:02.909168    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9ph5\" (UniqueName: \"kubernetes.io/projected/3fb5375e-0676-45dc-a826-bb0cb74ab32d-kube-api-access-k9ph5\") pod \"helper-pod-create-pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d\" (UID: \"3fb5375e-0676-45dc-a826-bb0cb74ab32d\") " pod="local-path-storage/helper-pod-create-pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d"
	
	
	==> storage-provisioner [01d6b9bf1de88e27a372ace627c4c029fd51c26dd3f9e477e70137ecab416c36] <==
	W1123 09:24:38.234173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:24:40.237916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:24:40.241991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:24:42.245242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:24:42.249393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:24:44.252073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:24:44.255543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:24:46.259414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:24:46.265350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:24:48.268218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:24:48.272260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:24:50.275316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:24:50.279532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:24:52.282407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:24:52.330513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:24:54.333527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:24:54.337473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:24:56.340195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:24:56.343650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:24:58.346400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:24:58.351046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:25:00.353872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:25:00.358414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:25:02.361772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:25:02.366220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-768607 -n addons-768607
helpers_test.go:269: (dbg) Run:  kubectl --context addons-768607 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: test-local-path gcp-auth-certs-patch-zqhhc ingress-nginx-admission-create-gxxrb ingress-nginx-admission-patch-6r4gd registry-creds-764b6fb674-pf8cs helper-pod-create-pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-768607 describe pod test-local-path gcp-auth-certs-patch-zqhhc ingress-nginx-admission-create-gxxrb ingress-nginx-admission-patch-6r4gd registry-creds-764b6fb674-pf8cs helper-pod-create-pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-768607 describe pod test-local-path gcp-auth-certs-patch-zqhhc ingress-nginx-admission-create-gxxrb ingress-nginx-admission-patch-6r4gd registry-creds-764b6fb674-pf8cs helper-pod-create-pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d: exit status 1 (77.294038ms)

                                                
                                                
-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pjctf (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-pjctf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-zqhhc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-gxxrb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6r4gd" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-pf8cs" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-768607 describe pod test-local-path gcp-auth-certs-patch-zqhhc ingress-nginx-admission-create-gxxrb ingress-nginx-admission-patch-6r4gd registry-creds-764b6fb674-pf8cs helper-pod-create-pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768607 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-768607 addons disable headlamp --alsologtostderr -v=1: exit status 11 (270.143302ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:25:05.065704   78491 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:25:05.066001   78491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:05.066011   78491 out.go:374] Setting ErrFile to fd 2...
	I1123 09:25:05.066015   78491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:05.066244   78491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:25:05.066612   78491 mustload.go:66] Loading cluster: addons-768607
	I1123 09:25:05.067061   78491 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:05.067080   78491 addons.go:622] checking whether the cluster is paused
	I1123 09:25:05.067215   78491 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:05.067236   78491 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:25:05.067628   78491 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:25:05.085634   78491 ssh_runner.go:195] Run: systemctl --version
	I1123 09:25:05.085700   78491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:25:05.104144   78491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:25:05.206034   78491 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:25:05.206172   78491 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:25:05.236295   78491 cri.go:89] found id: "25a90399c18236ad6f1bd9852bf514abf0cfdc53c80ac7131707ae0c129914ea"
	I1123 09:25:05.236324   78491 cri.go:89] found id: "30475367013dc133bcf31a113a5c805e13a7ae522a2da8b1822d775743fa921d"
	I1123 09:25:05.236330   78491 cri.go:89] found id: "c692f2c4458f012e6dea37a4c5038473c8cfd52404290a9de36ee5f9dd461c33"
	I1123 09:25:05.236335   78491 cri.go:89] found id: "231168bdacbd0c44ccde524c4583dfde2507563b41dee16504f6b24aef69a685"
	I1123 09:25:05.236339   78491 cri.go:89] found id: "1364a68c663de4ec03d6c3f263b8fa435f60ed6f587a1d8c69c68574c3028a16"
	I1123 09:25:05.236342   78491 cri.go:89] found id: "57e021fa16b348dacd32b5db60bcf18618a4bd2723bead3fec97bd84820ae20d"
	I1123 09:25:05.236345   78491 cri.go:89] found id: "3be653d3906b767500336b15d61cf5636f7a5b7c372f4f4239c08bad906d64bb"
	I1123 09:25:05.236348   78491 cri.go:89] found id: "ae93f08af7cde91292f142bed64324d42e3c1e7deb6434ab827d5ecf8065d37c"
	I1123 09:25:05.236351   78491 cri.go:89] found id: "58c6caa5d7a2a89fda27f06ce40a04b27480a4b2e04bb5411861ff89abe5e146"
	I1123 09:25:05.236365   78491 cri.go:89] found id: "021ee69331dd21bf229ecd1db5d55d798fd0eee37dc0bb0b9a624c5cbccc770f"
	I1123 09:25:05.236368   78491 cri.go:89] found id: "59c5e7c66e3835f4f14bd4a82f661c738488bc6c624cc2fd13eabad0519797c8"
	I1123 09:25:05.236370   78491 cri.go:89] found id: "f4fec8768321222a9f9bf178328a43695dc72a29313975ce785004a208ca5af3"
	I1123 09:25:05.236374   78491 cri.go:89] found id: "00cf685e4f7633fdc7ff68303c67f4f43add29e8d9ccd87d21a6a088f2fdbc68"
	I1123 09:25:05.236376   78491 cri.go:89] found id: "6e05171fad5d43e018ce9c94cfb7891e9984df090d0b8adddc8122e2efd84ff6"
	I1123 09:25:05.236379   78491 cri.go:89] found id: "da035bc9e46eb83341d5eb40ca2fa703f3cea336a6a912ef4a80eeaf0a0ac076"
	I1123 09:25:05.236384   78491 cri.go:89] found id: "c21acab334cad461bd90789dbd1cf7e4a162446d76ac18a241cd3b8f9863be14"
	I1123 09:25:05.236387   78491 cri.go:89] found id: "8f3fdc51b52f6779513f36acefb86bcc8943baf18483f08bf8cce60927bd9cd4"
	I1123 09:25:05.236391   78491 cri.go:89] found id: "01d6b9bf1de88e27a372ace627c4c029fd51c26dd3f9e477e70137ecab416c36"
	I1123 09:25:05.236394   78491 cri.go:89] found id: "403102191b13c2eef45478f5af6a1ed72ff7fbdea27c8bebc65ffccf6197a3be"
	I1123 09:25:05.236396   78491 cri.go:89] found id: "d98e916f227153ff84dad39f7895deed814fbbef0272aa14546e6a49f6c7226d"
	I1123 09:25:05.236399   78491 cri.go:89] found id: "628b56a1e0e47a0532ea5375471e5d17f64b1bece8bd8004b4ed449cf90764a3"
	I1123 09:25:05.236402   78491 cri.go:89] found id: "93dfa5558a7a808c5c354787ab8eec238559016b46eb3e6825f32eb25403e092"
	I1123 09:25:05.236405   78491 cri.go:89] found id: "b5f64ab3094a653f0bd8f634e5e2cc5066d0b571ace3c66c888b4190eadc2d99"
	I1123 09:25:05.236416   78491 cri.go:89] found id: "d8909c0c21553cdb1824a36e8e2357948596cd908eaa63008f1925c3a97b4f14"
	I1123 09:25:05.236419   78491 cri.go:89] found id: ""
	I1123 09:25:05.236460   78491 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:25:05.250912   78491 out.go:203] 
	W1123 09:25:05.252037   78491 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:25:05.252055   78491 out.go:285] * 
	* 
	W1123 09:25:05.256028   78491 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:25:05.257345   78491 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-768607 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.74s)
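Every "addons disable" failure in this run has the same shape: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers through the CRI and then running "sudo runc list -f json" on the node, and that second command exits 1 because /run/runc does not exist. On this CRI-O profile the default OCI runtime is presumably not runc (an assumption inferred from the missing state directory), so the paused check can never succeed and every disable exits with MK_ADDON_DISABLE_PAUSED. A minimal sketch for reproducing the check by hand, using the profile name from the logs:

	# List kube-system containers through the CRI, exactly as the disable path does.
	minikube -p addons-768607 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# The call that fails above: runc keeps per-container state under /run/runc,
	# which is absent on this node, so the command exits 1 before any pause state is read.
	minikube -p addons-768607 ssh -- sudo runc list -f json
	# Assumption: if CRI-O is configured with crun instead, its state lives under /run/crun.
	minikube -p addons-768607 ssh -- ls -d /run/runc /run/crun

The same error block recurs verbatim in every disable attempt below.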

                                                
                                    
TestAddons/parallel/CloudSpanner (6.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-qn9ss" [f66ffb0e-20d1-4210-9215-0b6e59dc3847] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003732634s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768607 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-768607 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (244.529249ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:25:24.622266   80735 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:25:24.622388   80735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:24.622399   80735 out.go:374] Setting ErrFile to fd 2...
	I1123 09:25:24.622405   80735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:24.622620   80735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:25:24.622872   80735 mustload.go:66] Loading cluster: addons-768607
	I1123 09:25:24.623208   80735 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:24.623223   80735 addons.go:622] checking whether the cluster is paused
	I1123 09:25:24.623301   80735 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:24.623314   80735 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:25:24.623711   80735 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:25:24.640648   80735 ssh_runner.go:195] Run: systemctl --version
	I1123 09:25:24.640701   80735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:25:24.657775   80735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:25:24.758106   80735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:25:24.758213   80735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:25:24.786639   80735 cri.go:89] found id: "25a90399c18236ad6f1bd9852bf514abf0cfdc53c80ac7131707ae0c129914ea"
	I1123 09:25:24.786683   80735 cri.go:89] found id: "30475367013dc133bcf31a113a5c805e13a7ae522a2da8b1822d775743fa921d"
	I1123 09:25:24.786689   80735 cri.go:89] found id: "c692f2c4458f012e6dea37a4c5038473c8cfd52404290a9de36ee5f9dd461c33"
	I1123 09:25:24.786694   80735 cri.go:89] found id: "231168bdacbd0c44ccde524c4583dfde2507563b41dee16504f6b24aef69a685"
	I1123 09:25:24.786699   80735 cri.go:89] found id: "1364a68c663de4ec03d6c3f263b8fa435f60ed6f587a1d8c69c68574c3028a16"
	I1123 09:25:24.786705   80735 cri.go:89] found id: "57e021fa16b348dacd32b5db60bcf18618a4bd2723bead3fec97bd84820ae20d"
	I1123 09:25:24.786709   80735 cri.go:89] found id: "3be653d3906b767500336b15d61cf5636f7a5b7c372f4f4239c08bad906d64bb"
	I1123 09:25:24.786713   80735 cri.go:89] found id: "ae93f08af7cde91292f142bed64324d42e3c1e7deb6434ab827d5ecf8065d37c"
	I1123 09:25:24.786718   80735 cri.go:89] found id: "58c6caa5d7a2a89fda27f06ce40a04b27480a4b2e04bb5411861ff89abe5e146"
	I1123 09:25:24.786730   80735 cri.go:89] found id: "021ee69331dd21bf229ecd1db5d55d798fd0eee37dc0bb0b9a624c5cbccc770f"
	I1123 09:25:24.786736   80735 cri.go:89] found id: "59c5e7c66e3835f4f14bd4a82f661c738488bc6c624cc2fd13eabad0519797c8"
	I1123 09:25:24.786739   80735 cri.go:89] found id: "f4fec8768321222a9f9bf178328a43695dc72a29313975ce785004a208ca5af3"
	I1123 09:25:24.786742   80735 cri.go:89] found id: "00cf685e4f7633fdc7ff68303c67f4f43add29e8d9ccd87d21a6a088f2fdbc68"
	I1123 09:25:24.786744   80735 cri.go:89] found id: "6e05171fad5d43e018ce9c94cfb7891e9984df090d0b8adddc8122e2efd84ff6"
	I1123 09:25:24.786747   80735 cri.go:89] found id: "da035bc9e46eb83341d5eb40ca2fa703f3cea336a6a912ef4a80eeaf0a0ac076"
	I1123 09:25:24.786760   80735 cri.go:89] found id: "c21acab334cad461bd90789dbd1cf7e4a162446d76ac18a241cd3b8f9863be14"
	I1123 09:25:24.786769   80735 cri.go:89] found id: "8f3fdc51b52f6779513f36acefb86bcc8943baf18483f08bf8cce60927bd9cd4"
	I1123 09:25:24.786773   80735 cri.go:89] found id: "01d6b9bf1de88e27a372ace627c4c029fd51c26dd3f9e477e70137ecab416c36"
	I1123 09:25:24.786775   80735 cri.go:89] found id: "403102191b13c2eef45478f5af6a1ed72ff7fbdea27c8bebc65ffccf6197a3be"
	I1123 09:25:24.786778   80735 cri.go:89] found id: "d98e916f227153ff84dad39f7895deed814fbbef0272aa14546e6a49f6c7226d"
	I1123 09:25:24.786784   80735 cri.go:89] found id: "628b56a1e0e47a0532ea5375471e5d17f64b1bece8bd8004b4ed449cf90764a3"
	I1123 09:25:24.786786   80735 cri.go:89] found id: "93dfa5558a7a808c5c354787ab8eec238559016b46eb3e6825f32eb25403e092"
	I1123 09:25:24.786789   80735 cri.go:89] found id: "b5f64ab3094a653f0bd8f634e5e2cc5066d0b571ace3c66c888b4190eadc2d99"
	I1123 09:25:24.786791   80735 cri.go:89] found id: "d8909c0c21553cdb1824a36e8e2357948596cd908eaa63008f1925c3a97b4f14"
	I1123 09:25:24.786794   80735 cri.go:89] found id: ""
	I1123 09:25:24.786844   80735 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:25:24.801563   80735 out.go:203] 
	W1123 09:25:24.802705   80735 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:25:24.802730   80735 out.go:285] * 
	* 
	W1123 09:25:24.806705   80735 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:25:24.807909   80735 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-768607 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.25s)
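The cloud-spanner-emulator pod itself became healthy well within the 6m0s window; only the shared disable step failed. The label-based wait that addons_test.go performs here can also be expressed with kubectl when re-checking an addon outside the test harness (names and timeout taken from the log above; the same pattern applies to the NvidiaDevicePlugin, Yakd and AmdGpuDevicePlugin waits further down):

	# Wait for the emulator pod by label, then show its status.
	kubectl --context addons-768607 -n default wait pod -l app=cloud-spanner-emulator --for=condition=Ready --timeout=6m
	kubectl --context addons-768607 -n default get pods -l app=cloud-spanner-emulator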

                                                
                                    
TestAddons/parallel/LocalPath (10.19s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-768607 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-768607 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768607 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [01141c1c-2754-48b9-84f7-9cb51d8fb7e6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [01141c1c-2754-48b9-84f7-9cb51d8fb7e6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [01141c1c-2754-48b9-84f7-9cb51d8fb7e6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003660175s
addons_test.go:967: (dbg) Run:  kubectl --context addons-768607 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-768607 ssh "cat /opt/local-path-provisioner/pvc-7db7b47d-529b-4b0d-b443-2afddf2b0f1d_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-768607 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-768607 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768607 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-768607 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (273.420506ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:25:12.503150   79395 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:25:12.503281   79395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:12.503290   79395 out.go:374] Setting ErrFile to fd 2...
	I1123 09:25:12.503295   79395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:12.503492   79395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:25:12.503773   79395 mustload.go:66] Loading cluster: addons-768607
	I1123 09:25:12.504145   79395 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:12.504170   79395 addons.go:622] checking whether the cluster is paused
	I1123 09:25:12.504303   79395 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:12.504321   79395 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:25:12.504868   79395 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:25:12.523346   79395 ssh_runner.go:195] Run: systemctl --version
	I1123 09:25:12.523414   79395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:25:12.544350   79395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:25:12.651079   79395 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:25:12.651208   79395 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:25:12.687364   79395 cri.go:89] found id: "25a90399c18236ad6f1bd9852bf514abf0cfdc53c80ac7131707ae0c129914ea"
	I1123 09:25:12.687398   79395 cri.go:89] found id: "30475367013dc133bcf31a113a5c805e13a7ae522a2da8b1822d775743fa921d"
	I1123 09:25:12.687405   79395 cri.go:89] found id: "c692f2c4458f012e6dea37a4c5038473c8cfd52404290a9de36ee5f9dd461c33"
	I1123 09:25:12.687419   79395 cri.go:89] found id: "231168bdacbd0c44ccde524c4583dfde2507563b41dee16504f6b24aef69a685"
	I1123 09:25:12.687424   79395 cri.go:89] found id: "1364a68c663de4ec03d6c3f263b8fa435f60ed6f587a1d8c69c68574c3028a16"
	I1123 09:25:12.687431   79395 cri.go:89] found id: "57e021fa16b348dacd32b5db60bcf18618a4bd2723bead3fec97bd84820ae20d"
	I1123 09:25:12.687435   79395 cri.go:89] found id: "3be653d3906b767500336b15d61cf5636f7a5b7c372f4f4239c08bad906d64bb"
	I1123 09:25:12.687440   79395 cri.go:89] found id: "ae93f08af7cde91292f142bed64324d42e3c1e7deb6434ab827d5ecf8065d37c"
	I1123 09:25:12.687444   79395 cri.go:89] found id: "58c6caa5d7a2a89fda27f06ce40a04b27480a4b2e04bb5411861ff89abe5e146"
	I1123 09:25:12.687455   79395 cri.go:89] found id: "021ee69331dd21bf229ecd1db5d55d798fd0eee37dc0bb0b9a624c5cbccc770f"
	I1123 09:25:12.687465   79395 cri.go:89] found id: "59c5e7c66e3835f4f14bd4a82f661c738488bc6c624cc2fd13eabad0519797c8"
	I1123 09:25:12.687470   79395 cri.go:89] found id: "f4fec8768321222a9f9bf178328a43695dc72a29313975ce785004a208ca5af3"
	I1123 09:25:12.687476   79395 cri.go:89] found id: "00cf685e4f7633fdc7ff68303c67f4f43add29e8d9ccd87d21a6a088f2fdbc68"
	I1123 09:25:12.687486   79395 cri.go:89] found id: "6e05171fad5d43e018ce9c94cfb7891e9984df090d0b8adddc8122e2efd84ff6"
	I1123 09:25:12.687492   79395 cri.go:89] found id: "da035bc9e46eb83341d5eb40ca2fa703f3cea336a6a912ef4a80eeaf0a0ac076"
	I1123 09:25:12.687512   79395 cri.go:89] found id: "c21acab334cad461bd90789dbd1cf7e4a162446d76ac18a241cd3b8f9863be14"
	I1123 09:25:12.687524   79395 cri.go:89] found id: "8f3fdc51b52f6779513f36acefb86bcc8943baf18483f08bf8cce60927bd9cd4"
	I1123 09:25:12.687531   79395 cri.go:89] found id: "01d6b9bf1de88e27a372ace627c4c029fd51c26dd3f9e477e70137ecab416c36"
	I1123 09:25:12.687536   79395 cri.go:89] found id: "403102191b13c2eef45478f5af6a1ed72ff7fbdea27c8bebc65ffccf6197a3be"
	I1123 09:25:12.687540   79395 cri.go:89] found id: "d98e916f227153ff84dad39f7895deed814fbbef0272aa14546e6a49f6c7226d"
	I1123 09:25:12.687547   79395 cri.go:89] found id: "628b56a1e0e47a0532ea5375471e5d17f64b1bece8bd8004b4ed449cf90764a3"
	I1123 09:25:12.687551   79395 cri.go:89] found id: "93dfa5558a7a808c5c354787ab8eec238559016b46eb3e6825f32eb25403e092"
	I1123 09:25:12.687555   79395 cri.go:89] found id: "b5f64ab3094a653f0bd8f634e5e2cc5066d0b571ace3c66c888b4190eadc2d99"
	I1123 09:25:12.687560   79395 cri.go:89] found id: "d8909c0c21553cdb1824a36e8e2357948596cd908eaa63008f1925c3a97b4f14"
	I1123 09:25:12.687564   79395 cri.go:89] found id: ""
	I1123 09:25:12.687616   79395 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:25:12.704116   79395 out.go:203] 
	W1123 09:25:12.705479   79395 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:25:12.705507   79395 out.go:285] * 
	* 
	W1123 09:25:12.709583   79395 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:25:12.711297   79395 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-768607 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.19s)
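Here, too, the storage path itself worked: the PVC bound, the test-local-path pod wrote /test/file1, and the file was read back from /opt/local-path-provisioner/ over ssh; only the disable step failed. A hand-rolled version of that check, assuming the addon's default StorageClass is named "local-path" (the testdata manifests are not reproduced in this section, so the spec below is an approximation of them):

	kubectl --context addons-768607 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  accessModes: ["ReadWriteOnce"]
	  storageClassName: local-path
	  resources:
	    requests:
	      storage: 64Mi
	---
	apiVersion: v1
	kind: Pod
	metadata:
	  name: test-local-path
	  labels:
	    run: test-local-path
	spec:
	  restartPolicy: Never
	  containers:
	  - name: busybox
	    image: busybox:stable
	    command: ["sh", "-c", "echo 'local-path-provisioner' > /test/file1"]
	    volumeMounts:
	    - name: data
	      mountPath: /test
	  volumes:
	  - name: data
	    persistentVolumeClaim:
	      claimName: test-pvc
	EOF
	# The PVC typically binds only once the pod is scheduled (WaitForFirstConsumer), so wait after applying both.
	kubectl --context addons-768607 wait pvc/test-pvc --for=jsonpath='{.status.phase}'=Bound --timeout=5m
	# The provisioned file can then be read from the node, as the test does; the pvc-<uid> directory name differs per run.
	minikube -p addons-768607 ssh -- ls /opt/local-path-provisioner/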

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-b9prj" [fa027fa5-6aa4-4e97-a108-f2ce777352d5] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004795009s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768607 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-768607 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (261.860128ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:25:07.588723   78639 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:25:07.589022   78639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:07.589039   78639 out.go:374] Setting ErrFile to fd 2...
	I1123 09:25:07.589045   78639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:07.589311   78639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:25:07.589595   78639 mustload.go:66] Loading cluster: addons-768607
	I1123 09:25:07.589942   78639 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:07.589962   78639 addons.go:622] checking whether the cluster is paused
	I1123 09:25:07.590071   78639 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:07.590103   78639 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:25:07.590506   78639 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:25:07.610127   78639 ssh_runner.go:195] Run: systemctl --version
	I1123 09:25:07.610205   78639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:25:07.628647   78639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:25:07.730823   78639 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:25:07.730940   78639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:25:07.760801   78639 cri.go:89] found id: "25a90399c18236ad6f1bd9852bf514abf0cfdc53c80ac7131707ae0c129914ea"
	I1123 09:25:07.760823   78639 cri.go:89] found id: "30475367013dc133bcf31a113a5c805e13a7ae522a2da8b1822d775743fa921d"
	I1123 09:25:07.760827   78639 cri.go:89] found id: "c692f2c4458f012e6dea37a4c5038473c8cfd52404290a9de36ee5f9dd461c33"
	I1123 09:25:07.760831   78639 cri.go:89] found id: "231168bdacbd0c44ccde524c4583dfde2507563b41dee16504f6b24aef69a685"
	I1123 09:25:07.760834   78639 cri.go:89] found id: "1364a68c663de4ec03d6c3f263b8fa435f60ed6f587a1d8c69c68574c3028a16"
	I1123 09:25:07.760837   78639 cri.go:89] found id: "57e021fa16b348dacd32b5db60bcf18618a4bd2723bead3fec97bd84820ae20d"
	I1123 09:25:07.760840   78639 cri.go:89] found id: "3be653d3906b767500336b15d61cf5636f7a5b7c372f4f4239c08bad906d64bb"
	I1123 09:25:07.760854   78639 cri.go:89] found id: "ae93f08af7cde91292f142bed64324d42e3c1e7deb6434ab827d5ecf8065d37c"
	I1123 09:25:07.760857   78639 cri.go:89] found id: "58c6caa5d7a2a89fda27f06ce40a04b27480a4b2e04bb5411861ff89abe5e146"
	I1123 09:25:07.760863   78639 cri.go:89] found id: "021ee69331dd21bf229ecd1db5d55d798fd0eee37dc0bb0b9a624c5cbccc770f"
	I1123 09:25:07.760866   78639 cri.go:89] found id: "59c5e7c66e3835f4f14bd4a82f661c738488bc6c624cc2fd13eabad0519797c8"
	I1123 09:25:07.760869   78639 cri.go:89] found id: "f4fec8768321222a9f9bf178328a43695dc72a29313975ce785004a208ca5af3"
	I1123 09:25:07.760871   78639 cri.go:89] found id: "00cf685e4f7633fdc7ff68303c67f4f43add29e8d9ccd87d21a6a088f2fdbc68"
	I1123 09:25:07.760874   78639 cri.go:89] found id: "6e05171fad5d43e018ce9c94cfb7891e9984df090d0b8adddc8122e2efd84ff6"
	I1123 09:25:07.760876   78639 cri.go:89] found id: "da035bc9e46eb83341d5eb40ca2fa703f3cea336a6a912ef4a80eeaf0a0ac076"
	I1123 09:25:07.760884   78639 cri.go:89] found id: "c21acab334cad461bd90789dbd1cf7e4a162446d76ac18a241cd3b8f9863be14"
	I1123 09:25:07.760890   78639 cri.go:89] found id: "8f3fdc51b52f6779513f36acefb86bcc8943baf18483f08bf8cce60927bd9cd4"
	I1123 09:25:07.760893   78639 cri.go:89] found id: "01d6b9bf1de88e27a372ace627c4c029fd51c26dd3f9e477e70137ecab416c36"
	I1123 09:25:07.760896   78639 cri.go:89] found id: "403102191b13c2eef45478f5af6a1ed72ff7fbdea27c8bebc65ffccf6197a3be"
	I1123 09:25:07.760899   78639 cri.go:89] found id: "d98e916f227153ff84dad39f7895deed814fbbef0272aa14546e6a49f6c7226d"
	I1123 09:25:07.760902   78639 cri.go:89] found id: "628b56a1e0e47a0532ea5375471e5d17f64b1bece8bd8004b4ed449cf90764a3"
	I1123 09:25:07.760905   78639 cri.go:89] found id: "93dfa5558a7a808c5c354787ab8eec238559016b46eb3e6825f32eb25403e092"
	I1123 09:25:07.760907   78639 cri.go:89] found id: "b5f64ab3094a653f0bd8f634e5e2cc5066d0b571ace3c66c888b4190eadc2d99"
	I1123 09:25:07.760910   78639 cri.go:89] found id: "d8909c0c21553cdb1824a36e8e2357948596cd908eaa63008f1925c3a97b4f14"
	I1123 09:25:07.760912   78639 cri.go:89] found id: ""
	I1123 09:25:07.760950   78639 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:25:07.775781   78639 out.go:203] 
	W1123 09:25:07.777107   78639 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:25:07.777133   78639 out.go:285] * 
	* 
	W1123 09:25:07.781312   78639 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:25:07.783011   78639 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-768607 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                    
TestAddons/parallel/Yakd (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-288kh" [ec35e9c8-0757-4ae6-a8f9-b95203cc68e2] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.002900331s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768607 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-768607 addons disable yakd --alsologtostderr -v=1: exit status 11 (245.218353ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:25:28.307198   80879 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:25:28.307465   80879 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:28.307475   80879 out.go:374] Setting ErrFile to fd 2...
	I1123 09:25:28.307478   80879 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:28.307651   80879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:25:28.307918   80879 mustload.go:66] Loading cluster: addons-768607
	I1123 09:25:28.308254   80879 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:28.308271   80879 addons.go:622] checking whether the cluster is paused
	I1123 09:25:28.308377   80879 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:28.308390   80879 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:25:28.308749   80879 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:25:28.325909   80879 ssh_runner.go:195] Run: systemctl --version
	I1123 09:25:28.325965   80879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:25:28.342487   80879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:25:28.443251   80879 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:25:28.443333   80879 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:25:28.473113   80879 cri.go:89] found id: "25a90399c18236ad6f1bd9852bf514abf0cfdc53c80ac7131707ae0c129914ea"
	I1123 09:25:28.473176   80879 cri.go:89] found id: "30475367013dc133bcf31a113a5c805e13a7ae522a2da8b1822d775743fa921d"
	I1123 09:25:28.473183   80879 cri.go:89] found id: "c692f2c4458f012e6dea37a4c5038473c8cfd52404290a9de36ee5f9dd461c33"
	I1123 09:25:28.473188   80879 cri.go:89] found id: "231168bdacbd0c44ccde524c4583dfde2507563b41dee16504f6b24aef69a685"
	I1123 09:25:28.473192   80879 cri.go:89] found id: "1364a68c663de4ec03d6c3f263b8fa435f60ed6f587a1d8c69c68574c3028a16"
	I1123 09:25:28.473199   80879 cri.go:89] found id: "57e021fa16b348dacd32b5db60bcf18618a4bd2723bead3fec97bd84820ae20d"
	I1123 09:25:28.473203   80879 cri.go:89] found id: "3be653d3906b767500336b15d61cf5636f7a5b7c372f4f4239c08bad906d64bb"
	I1123 09:25:28.473206   80879 cri.go:89] found id: "ae93f08af7cde91292f142bed64324d42e3c1e7deb6434ab827d5ecf8065d37c"
	I1123 09:25:28.473209   80879 cri.go:89] found id: "58c6caa5d7a2a89fda27f06ce40a04b27480a4b2e04bb5411861ff89abe5e146"
	I1123 09:25:28.473220   80879 cri.go:89] found id: "021ee69331dd21bf229ecd1db5d55d798fd0eee37dc0bb0b9a624c5cbccc770f"
	I1123 09:25:28.473227   80879 cri.go:89] found id: "59c5e7c66e3835f4f14bd4a82f661c738488bc6c624cc2fd13eabad0519797c8"
	I1123 09:25:28.473231   80879 cri.go:89] found id: "f4fec8768321222a9f9bf178328a43695dc72a29313975ce785004a208ca5af3"
	I1123 09:25:28.473238   80879 cri.go:89] found id: "00cf685e4f7633fdc7ff68303c67f4f43add29e8d9ccd87d21a6a088f2fdbc68"
	I1123 09:25:28.473241   80879 cri.go:89] found id: "6e05171fad5d43e018ce9c94cfb7891e9984df090d0b8adddc8122e2efd84ff6"
	I1123 09:25:28.473243   80879 cri.go:89] found id: "da035bc9e46eb83341d5eb40ca2fa703f3cea336a6a912ef4a80eeaf0a0ac076"
	I1123 09:25:28.473251   80879 cri.go:89] found id: "c21acab334cad461bd90789dbd1cf7e4a162446d76ac18a241cd3b8f9863be14"
	I1123 09:25:28.473256   80879 cri.go:89] found id: "8f3fdc51b52f6779513f36acefb86bcc8943baf18483f08bf8cce60927bd9cd4"
	I1123 09:25:28.473261   80879 cri.go:89] found id: "01d6b9bf1de88e27a372ace627c4c029fd51c26dd3f9e477e70137ecab416c36"
	I1123 09:25:28.473264   80879 cri.go:89] found id: "403102191b13c2eef45478f5af6a1ed72ff7fbdea27c8bebc65ffccf6197a3be"
	I1123 09:25:28.473266   80879 cri.go:89] found id: "d98e916f227153ff84dad39f7895deed814fbbef0272aa14546e6a49f6c7226d"
	I1123 09:25:28.473269   80879 cri.go:89] found id: "628b56a1e0e47a0532ea5375471e5d17f64b1bece8bd8004b4ed449cf90764a3"
	I1123 09:25:28.473272   80879 cri.go:89] found id: "93dfa5558a7a808c5c354787ab8eec238559016b46eb3e6825f32eb25403e092"
	I1123 09:25:28.473275   80879 cri.go:89] found id: "b5f64ab3094a653f0bd8f634e5e2cc5066d0b571ace3c66c888b4190eadc2d99"
	I1123 09:25:28.473278   80879 cri.go:89] found id: "d8909c0c21553cdb1824a36e8e2357948596cd908eaa63008f1925c3a97b4f14"
	I1123 09:25:28.473280   80879 cri.go:89] found id: ""
	I1123 09:25:28.473328   80879 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:25:28.487095   80879 out.go:203] 
	W1123 09:25:28.488223   80879 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:25:28.488252   80879 out.go:285] * 
	* 
	W1123 09:25:28.492183   80879 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:25:28.493357   80879 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-768607 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.25s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-8vlwk" [579f7026-b306-42b4-868b-da51bdb3aa62] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003857314s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768607 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-768607 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (242.678511ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:25:23.060146   80658 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:25:23.060292   80658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:23.060301   80658 out.go:374] Setting ErrFile to fd 2...
	I1123 09:25:23.060305   80658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:25:23.060507   80658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:25:23.060764   80658 mustload.go:66] Loading cluster: addons-768607
	I1123 09:25:23.061096   80658 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:23.061110   80658 addons.go:622] checking whether the cluster is paused
	I1123 09:25:23.061190   80658 config.go:182] Loaded profile config "addons-768607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:23.061203   80658 host.go:66] Checking if "addons-768607" exists ...
	I1123 09:25:23.061568   80658 cli_runner.go:164] Run: docker container inspect addons-768607 --format={{.State.Status}}
	I1123 09:25:23.078695   80658 ssh_runner.go:195] Run: systemctl --version
	I1123 09:25:23.078767   80658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-768607
	I1123 09:25:23.095223   80658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/addons-768607/id_rsa Username:docker}
	I1123 09:25:23.194491   80658 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:25:23.194579   80658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:25:23.223779   80658 cri.go:89] found id: "25a90399c18236ad6f1bd9852bf514abf0cfdc53c80ac7131707ae0c129914ea"
	I1123 09:25:23.223799   80658 cri.go:89] found id: "30475367013dc133bcf31a113a5c805e13a7ae522a2da8b1822d775743fa921d"
	I1123 09:25:23.223804   80658 cri.go:89] found id: "c692f2c4458f012e6dea37a4c5038473c8cfd52404290a9de36ee5f9dd461c33"
	I1123 09:25:23.223807   80658 cri.go:89] found id: "231168bdacbd0c44ccde524c4583dfde2507563b41dee16504f6b24aef69a685"
	I1123 09:25:23.223810   80658 cri.go:89] found id: "1364a68c663de4ec03d6c3f263b8fa435f60ed6f587a1d8c69c68574c3028a16"
	I1123 09:25:23.223815   80658 cri.go:89] found id: "57e021fa16b348dacd32b5db60bcf18618a4bd2723bead3fec97bd84820ae20d"
	I1123 09:25:23.223818   80658 cri.go:89] found id: "3be653d3906b767500336b15d61cf5636f7a5b7c372f4f4239c08bad906d64bb"
	I1123 09:25:23.223823   80658 cri.go:89] found id: "ae93f08af7cde91292f142bed64324d42e3c1e7deb6434ab827d5ecf8065d37c"
	I1123 09:25:23.223828   80658 cri.go:89] found id: "58c6caa5d7a2a89fda27f06ce40a04b27480a4b2e04bb5411861ff89abe5e146"
	I1123 09:25:23.223843   80658 cri.go:89] found id: "021ee69331dd21bf229ecd1db5d55d798fd0eee37dc0bb0b9a624c5cbccc770f"
	I1123 09:25:23.223852   80658 cri.go:89] found id: "59c5e7c66e3835f4f14bd4a82f661c738488bc6c624cc2fd13eabad0519797c8"
	I1123 09:25:23.223857   80658 cri.go:89] found id: "f4fec8768321222a9f9bf178328a43695dc72a29313975ce785004a208ca5af3"
	I1123 09:25:23.223864   80658 cri.go:89] found id: "00cf685e4f7633fdc7ff68303c67f4f43add29e8d9ccd87d21a6a088f2fdbc68"
	I1123 09:25:23.223869   80658 cri.go:89] found id: "6e05171fad5d43e018ce9c94cfb7891e9984df090d0b8adddc8122e2efd84ff6"
	I1123 09:25:23.223877   80658 cri.go:89] found id: "da035bc9e46eb83341d5eb40ca2fa703f3cea336a6a912ef4a80eeaf0a0ac076"
	I1123 09:25:23.223884   80658 cri.go:89] found id: "c21acab334cad461bd90789dbd1cf7e4a162446d76ac18a241cd3b8f9863be14"
	I1123 09:25:23.223891   80658 cri.go:89] found id: "8f3fdc51b52f6779513f36acefb86bcc8943baf18483f08bf8cce60927bd9cd4"
	I1123 09:25:23.223898   80658 cri.go:89] found id: "01d6b9bf1de88e27a372ace627c4c029fd51c26dd3f9e477e70137ecab416c36"
	I1123 09:25:23.223902   80658 cri.go:89] found id: "403102191b13c2eef45478f5af6a1ed72ff7fbdea27c8bebc65ffccf6197a3be"
	I1123 09:25:23.223905   80658 cri.go:89] found id: "d98e916f227153ff84dad39f7895deed814fbbef0272aa14546e6a49f6c7226d"
	I1123 09:25:23.223910   80658 cri.go:89] found id: "628b56a1e0e47a0532ea5375471e5d17f64b1bece8bd8004b4ed449cf90764a3"
	I1123 09:25:23.223918   80658 cri.go:89] found id: "93dfa5558a7a808c5c354787ab8eec238559016b46eb3e6825f32eb25403e092"
	I1123 09:25:23.223923   80658 cri.go:89] found id: "b5f64ab3094a653f0bd8f634e5e2cc5066d0b571ace3c66c888b4190eadc2d99"
	I1123 09:25:23.223930   80658 cri.go:89] found id: "d8909c0c21553cdb1824a36e8e2357948596cd908eaa63008f1925c3a97b4f14"
	I1123 09:25:23.223935   80658 cri.go:89] found id: ""
	I1123 09:25:23.223988   80658 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:25:23.237720   80658 out.go:203] 
	W1123 09:25:23.238786   80658 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:25:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:25:23.238803   80658 out.go:285] * 
	* 
	W1123 09:25:23.242767   80658 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:25:23.243982   80658 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-768607 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.25s)
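The Yakd and AmdGpuDevicePlugin failures above (like the other addon-disable failures in this report) share one root cause: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running "sudo runc list -f json" inside the node, and that second command exits 1 with "open /run/runc: no such file or directory". A minimal repro sketch, assuming the addons-768607 profile is still up; the guess that CRI-O is wired to an OCI runtime or state root other than /run/runc is an assumption, not something this report confirms:

	# Re-run the two commands behind minikube's "check paused" step inside the node
	minikube -p addons-768607 ssh -- "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	minikube -p addons-768607 ssh -- "sudo runc list -f json"
	# See which low-level runtime CRI-O is actually configured with; if it is not runc,
	# there is no /run/runc state directory for runc to list and the error above is expected
	minikube -p addons-768607 ssh -- "sudo crio config | grep -A 5 'crio.runtime.runtimes'"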

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (602.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-157940 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-157940 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-k7p7h" [f85f0981-b42d-4b92-b370-beb660cbaada] Pending
E1123 09:31:15.948604   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "hello-node-connect-7d85dfc575-k7p7h" [f85f0981-b42d-4b92-b370-beb660cbaada] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-157940 -n functional-157940
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-23 09:41:14.453837502 +0000 UTC m=+1161.290938925
functional_test.go:1645: (dbg) Run:  kubectl --context functional-157940 describe po hello-node-connect-7d85dfc575-k7p7h -n default
functional_test.go:1645: (dbg) kubectl --context functional-157940 describe po hello-node-connect-7d85dfc575-k7p7h -n default:
Name:             hello-node-connect-7d85dfc575-k7p7h
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-157940/192.168.49.2
Start Time:       Sun, 23 Nov 2025 09:31:46 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9pws5 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-9pws5:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m27s                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-k7p7h to functional-157940
Normal   Pulling    6m19s (x5 over 9m27s)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m19s (x5 over 9m22s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m19s (x5 over 9m22s)   kubelet            Error: ErrImagePull
Normal   BackOff    4m12s (x21 over 9m21s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m12s (x21 over 9m21s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-157940 logs hello-node-connect-7d85dfc575-k7p7h -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-157940 logs hello-node-connect-7d85dfc575-k7p7h -n default: exit status 1 (64.894655ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-k7p7h" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-157940 logs hello-node-connect-7d85dfc575-k7p7h -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
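The Failed events above show the pull being rejected by short-name policy rather than by a missing image: with short-name mode set to enforcing, the unqualified reference kicbase/echo-server:latest resolves ambiguously across the configured search registries, so CRI-O refuses the pull and the Deployment stays in ImagePullBackOff. A workaround sketch under the assumption that docker.io/kicbase/echo-server is the intended source (the registry prefix is an assumption, not taken from this report):

	# Point the Deployment at a fully qualified image so no short-name resolution is needed
	kubectl --context functional-157940 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server:latest
	kubectl --context functional-157940 rollout status deployment/hello-node-connect --timeout=120s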
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-157940 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-k7p7h
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-157940/192.168.49.2
Start Time:       Sun, 23 Nov 2025 09:31:46 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9pws5 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-9pws5:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m27s                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-k7p7h to functional-157940
Normal   Pulling    6m19s (x5 over 9m27s)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m19s (x5 over 9m22s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m19s (x5 over 9m22s)   kubelet            Error: ErrImagePull
Normal   BackOff    4m12s (x21 over 9m21s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m12s (x21 over 9m21s)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-157940 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-157940 logs -l app=hello-node-connect: exit status 1 (65.073897ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-k7p7h" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-157940 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-157940 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.105.121.89
IPs:                      10.105.121.89
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30570/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
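Consistent with the pod never becoming Ready, the Service above has an empty Endpoints field, so requests to NodePort 30570 have nothing to route to; the service failure is downstream of the image pull failure. A quick confirmation sketch, under the same assumptions as above:

	# An empty endpoints list means the app=hello-node-connect selector matched no Ready pods
	kubectl --context functional-157940 get endpoints hello-node-connect
	kubectl --context functional-157940 get pods -l app=hello-node-connect -o wide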
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-157940
helpers_test.go:243: (dbg) docker inspect functional-157940:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "45afd1cafba8d80eff558c870a4aade25c42ab120801c53e1f45b444a1c81157",
	        "Created": "2025-11-23T09:28:48.041580529Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 91780,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:28:48.075850924Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/45afd1cafba8d80eff558c870a4aade25c42ab120801c53e1f45b444a1c81157/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/45afd1cafba8d80eff558c870a4aade25c42ab120801c53e1f45b444a1c81157/hostname",
	        "HostsPath": "/var/lib/docker/containers/45afd1cafba8d80eff558c870a4aade25c42ab120801c53e1f45b444a1c81157/hosts",
	        "LogPath": "/var/lib/docker/containers/45afd1cafba8d80eff558c870a4aade25c42ab120801c53e1f45b444a1c81157/45afd1cafba8d80eff558c870a4aade25c42ab120801c53e1f45b444a1c81157-json.log",
	        "Name": "/functional-157940",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-157940:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-157940",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "45afd1cafba8d80eff558c870a4aade25c42ab120801c53e1f45b444a1c81157",
	                "LowerDir": "/var/lib/docker/overlay2/809b622053b1a5ef52c29260d0bb78985d558887372dfae401569e1ea8162c99-init/diff:/var/lib/docker/overlay2/fa24abb4c55f78a010c7e2a32f724b8d5e912441e40bb77877899b0e5f3a9c8d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/809b622053b1a5ef52c29260d0bb78985d558887372dfae401569e1ea8162c99/merged",
	                "UpperDir": "/var/lib/docker/overlay2/809b622053b1a5ef52c29260d0bb78985d558887372dfae401569e1ea8162c99/diff",
	                "WorkDir": "/var/lib/docker/overlay2/809b622053b1a5ef52c29260d0bb78985d558887372dfae401569e1ea8162c99/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-157940",
	                "Source": "/var/lib/docker/volumes/functional-157940/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-157940",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-157940",
	                "name.minikube.sigs.k8s.io": "functional-157940",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ec14b25fe9eef721df24c10728176b1d74f7a9b37443a62b8e2ab970f3e5bf2f",
	            "SandboxKey": "/var/run/docker/netns/ec14b25fe9ee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-157940": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de2df40f53ef461d6c58a1583ddfe5ba2db76940910e81e61c842c30d9a2c59a",
	                    "EndpointID": "7d84d330811eba7b9e7a19bb783ee0f2b4763b7282f0eb770208ec3391fd1622",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "3e:49:6e:b2:35:c9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-157940",
	                        "45afd1cafba8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-157940 -n functional-157940
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-157940 logs -n 25: (1.309692535s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start          │ -p functional-157940 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │                     │
	│ image          │ functional-157940 image load --daemon kicbase/echo-server:functional-157940 --alsologtostderr                                                                   │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │ 23 Nov 25 09:32 UTC │
	│ image          │ functional-157940 image ls                                                                                                                                      │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │ 23 Nov 25 09:32 UTC │
	│ image          │ functional-157940 image load --daemon kicbase/echo-server:functional-157940 --alsologtostderr                                                                   │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │ 23 Nov 25 09:32 UTC │
	│ image          │ functional-157940 image ls                                                                                                                                      │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │ 23 Nov 25 09:32 UTC │
	│ image          │ functional-157940 image load --daemon kicbase/echo-server:functional-157940 --alsologtostderr                                                                   │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │ 23 Nov 25 09:32 UTC │
	│ image          │ functional-157940 image ls                                                                                                                                      │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │ 23 Nov 25 09:32 UTC │
	│ image          │ functional-157940 image save kicbase/echo-server:functional-157940 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │ 23 Nov 25 09:32 UTC │
	│ image          │ functional-157940 image rm kicbase/echo-server:functional-157940 --alsologtostderr                                                                              │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │ 23 Nov 25 09:32 UTC │
	│ image          │ functional-157940 image ls                                                                                                                                      │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │ 23 Nov 25 09:32 UTC │
	│ image          │ functional-157940 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │ 23 Nov 25 09:32 UTC │
	│ image          │ functional-157940 image save --daemon kicbase/echo-server:functional-157940 --alsologtostderr                                                                   │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │ 23 Nov 25 09:32 UTC │
	│ dashboard      │ --url --port 36195 -p functional-157940 --alsologtostderr -v=1                                                                                                  │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │ 23 Nov 25 09:32 UTC │
	│ update-context │ functional-157940 update-context --alsologtostderr -v=2                                                                                                         │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │ 23 Nov 25 09:32 UTC │
	│ update-context │ functional-157940 update-context --alsologtostderr -v=2                                                                                                         │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │ 23 Nov 25 09:32 UTC │
	│ update-context │ functional-157940 update-context --alsologtostderr -v=2                                                                                                         │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │ 23 Nov 25 09:32 UTC │
	│ image          │ functional-157940 image ls --format short --alsologtostderr                                                                                                     │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │ 23 Nov 25 09:32 UTC │
	│ image          │ functional-157940 image ls --format yaml --alsologtostderr                                                                                                      │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │ 23 Nov 25 09:32 UTC │
	│ ssh            │ functional-157940 ssh pgrep buildkitd                                                                                                                           │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │                     │
	│ image          │ functional-157940 image ls --format json --alsologtostderr                                                                                                      │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │ 23 Nov 25 09:32 UTC │
	│ image          │ functional-157940 image ls --format table --alsologtostderr                                                                                                     │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │ 23 Nov 25 09:32 UTC │
	│ image          │ functional-157940 image build -t localhost/my-image:functional-157940 testdata/build --alsologtostderr                                                          │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │ 23 Nov 25 09:32 UTC │
	│ image          │ functional-157940 image ls                                                                                                                                      │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:32 UTC │ 23 Nov 25 09:32 UTC │
	│ service        │ functional-157940 service list                                                                                                                                  │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:41 UTC │ 23 Nov 25 09:41 UTC │
	│ service        │ functional-157940 service list -o json                                                                                                                          │ functional-157940 │ jenkins │ v1.37.0 │ 23 Nov 25 09:41 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:32:13
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:32:13.004634  106306 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:32:13.004716  106306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:32:13.004720  106306 out.go:374] Setting ErrFile to fd 2...
	I1123 09:32:13.004724  106306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:32:13.005212  106306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:32:13.005782  106306 out.go:368] Setting JSON to false
	I1123 09:32:13.007100  106306 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8074,"bootTime":1763882259,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:32:13.007184  106306 start.go:143] virtualization: kvm guest
	I1123 09:32:13.010850  106306 out.go:179] * [functional-157940] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:32:13.012865  106306 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 09:32:13.012852  106306 notify.go:221] Checking for updates...
	I1123 09:32:13.014270  106306 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:32:13.016575  106306 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 09:32:13.018321  106306 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 09:32:13.019789  106306 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:32:13.021172  106306 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:32:13.023206  106306 config.go:182] Loaded profile config "functional-157940": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:32:13.024062  106306 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:32:13.053879  106306 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:32:13.054019  106306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:32:13.122835  106306 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-23 09:32:13.110563043 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:32:13.122995  106306 docker.go:319] overlay module found
	I1123 09:32:13.125788  106306 out.go:179] * Using the docker driver based on the existing profile
	I1123 09:32:13.127080  106306 start.go:309] selected driver: docker
	I1123 09:32:13.127123  106306 start.go:927] validating driver "docker" against &{Name:functional-157940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-157940 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:32:13.127240  106306 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:32:13.129240  106306 out.go:203] 
	W1123 09:32:13.130452  106306 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1123 09:32:13.131685  106306 out.go:203] 
	
	
	==> CRI-O <==
	Nov 23 09:32:25 functional-157940 crio[3591]: time="2025-11-23T09:32:25.915618174Z" level=info msg="Started container" PID=7573 containerID=e0e87af43418b4cde265e87c82823d22dc6d363d54c0b2460be7b40801175a3f description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-74rj2/kubernetes-dashboard id=b21c158d-5b0e-4de0-8db1-c8497f6ac0b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5aa2835541dee4257eae412508cce00b6c9eeb5e5d45fed53aecfd3b65334888
	Nov 23 09:32:27 functional-157940 crio[3591]: time="2025-11-23T09:32:27.849581702Z" level=info msg="Pulled image: docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a" id=7cf4e9fe-c0d9-460c-97aa-ab307aef1489 name=/runtime.v1.ImageService/PullImage
	Nov 23 09:32:27 functional-157940 crio[3591]: time="2025-11-23T09:32:27.850213067Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=383d7c33-ea36-4954-afb7-dceeeca55bd1 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:32:27 functional-157940 crio[3591]: time="2025-11-23T09:32:27.851507475Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4cb58b26-f258-458c-a3a7-c4fee4212fe8 name=/runtime.v1.ImageService/PullImage
	Nov 23 09:32:27 functional-157940 crio[3591]: time="2025-11-23T09:32:27.851845008Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=3d995f00-b5a1-4462-90f4-9d8761abb02a name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:32:27 functional-157940 crio[3591]: time="2025-11-23T09:32:27.855564217Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-ccfbk/dashboard-metrics-scraper" id=4fa5bad2-d8bf-460e-a523-5d0d8e8a9aba name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:32:27 functional-157940 crio[3591]: time="2025-11-23T09:32:27.855701375Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:32:27 functional-157940 crio[3591]: time="2025-11-23T09:32:27.861018045Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:32:27 functional-157940 crio[3591]: time="2025-11-23T09:32:27.861382138Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3f0270f76a5db94a79a307ec0e7492beb74ed5d39e78e2e2301c9c709ee05d4c/merged/etc/group: no such file or directory"
	Nov 23 09:32:27 functional-157940 crio[3591]: time="2025-11-23T09:32:27.861886409Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:32:27 functional-157940 crio[3591]: time="2025-11-23T09:32:27.89093175Z" level=info msg="Created container 7dbe27629246f17da67b2015075f244e27f6a61ae2895fdee5d48057391b6d59: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-ccfbk/dashboard-metrics-scraper" id=4fa5bad2-d8bf-460e-a523-5d0d8e8a9aba name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:32:27 functional-157940 crio[3591]: time="2025-11-23T09:32:27.893138671Z" level=info msg="Starting container: 7dbe27629246f17da67b2015075f244e27f6a61ae2895fdee5d48057391b6d59" id=05450d38-038d-4319-ac74-1a055c1a7711 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:32:27 functional-157940 crio[3591]: time="2025-11-23T09:32:27.895483632Z" level=info msg="Started container" PID=7634 containerID=7dbe27629246f17da67b2015075f244e27f6a61ae2895fdee5d48057391b6d59 description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-ccfbk/dashboard-metrics-scraper id=05450d38-038d-4319-ac74-1a055c1a7711 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ce1031400e60bba8af33f41165d610078d56d3d3cd8b3192542d11639c94782d
	Nov 23 09:32:32 functional-157940 crio[3591]: time="2025-11-23T09:32:32.235112833Z" level=info msg="Stopping pod sandbox: b39be57cd25320287b86dec8f78b2dc79f3884a41983f25edac1a9ed1e720a26" id=ed05fc4c-28ae-4337-bad2-937dfab947d7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 09:32:32 functional-157940 crio[3591]: time="2025-11-23T09:32:32.235162308Z" level=info msg="Stopped pod sandbox (already stopped): b39be57cd25320287b86dec8f78b2dc79f3884a41983f25edac1a9ed1e720a26" id=ed05fc4c-28ae-4337-bad2-937dfab947d7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 09:32:32 functional-157940 crio[3591]: time="2025-11-23T09:32:32.235478051Z" level=info msg="Removing pod sandbox: b39be57cd25320287b86dec8f78b2dc79f3884a41983f25edac1a9ed1e720a26" id=385397ee-e375-4534-bf68-2e22abd7d9c3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 09:32:32 functional-157940 crio[3591]: time="2025-11-23T09:32:32.238507242Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 09:32:32 functional-157940 crio[3591]: time="2025-11-23T09:32:32.238561365Z" level=info msg="Removed pod sandbox: b39be57cd25320287b86dec8f78b2dc79f3884a41983f25edac1a9ed1e720a26" id=385397ee-e375-4534-bf68-2e22abd7d9c3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 09:32:34 functional-157940 crio[3591]: time="2025-11-23T09:32:34.240693577Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2ae41820-fd64-44e7-88cf-5b8b477ebcb8 name=/runtime.v1.ImageService/PullImage
	Nov 23 09:33:22 functional-157940 crio[3591]: time="2025-11-23T09:33:22.243019171Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d68b3627-28ad-4034-bb5d-73e5a55332db name=/runtime.v1.ImageService/PullImage
	Nov 23 09:33:28 functional-157940 crio[3591]: time="2025-11-23T09:33:28.240972189Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=cdc6efd4-9755-4708-8683-390e4ce22fef name=/runtime.v1.ImageService/PullImage
	Nov 23 09:34:52 functional-157940 crio[3591]: time="2025-11-23T09:34:52.24218663Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1b2bafde-6f27-4c73-9895-cef2c633bdd8 name=/runtime.v1.ImageService/PullImage
	Nov 23 09:34:55 functional-157940 crio[3591]: time="2025-11-23T09:34:55.241156584Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=459cf0fc-b5ce-4fb2-aee4-1bf9e88a1b69 name=/runtime.v1.ImageService/PullImage
	Nov 23 09:37:39 functional-157940 crio[3591]: time="2025-11-23T09:37:39.240788272Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=eb3a6ed8-05d0-44a2-8a7a-79e2e545311f name=/runtime.v1.ImageService/PullImage
	Nov 23 09:37:39 functional-157940 crio[3591]: time="2025-11-23T09:37:39.241535883Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4f666a49-8497-4eba-a9e8-0adf23a684a2 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	7dbe27629246f       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   8 minutes ago       Running             dashboard-metrics-scraper   0                   ce1031400e60b       dashboard-metrics-scraper-77bf4d6c4c-ccfbk   kubernetes-dashboard
	e0e87af43418b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         8 minutes ago       Running             kubernetes-dashboard        0                   5aa2835541dee       kubernetes-dashboard-855c9754f9-74rj2        kubernetes-dashboard
	9675991c806d5       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  8 minutes ago       Running             mysql                       0                   90f312255d099       mysql-5bb876957f-xb9rw                       default
	96364ae6369a9       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   0866c4be56c2e       busybox-mount                                default
	fb30037fd1175       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  9 minutes ago       Running             myfrontend                  0                   158595081c36b       sp-pod                                       default
	ee7d904d520fe       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  9 minutes ago       Running             nginx                       0                   0a7c57c75b65d       nginx-svc                                    default
	145614830dc8c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     2                   59ae20e727864       kube-controller-manager-functional-157940    kube-system
	46bd1350bb861       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              2                   b747c046269b5       kube-apiserver-functional-157940             kube-system
	52dca88ce5a73       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Exited              kube-apiserver              1                   b747c046269b5       kube-apiserver-functional-157940             kube-system
	5dc38b8ef8203       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Exited              kube-controller-manager     1                   59ae20e727864       kube-controller-manager-functional-157940    kube-system
	d822318a207f8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   6023c682f7350       etcd-functional-157940                       kube-system
	145c9d18b2b25       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   20654ebc64bf9       coredns-66bc5c9577-kfqb9                     kube-system
	dee22df063269       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Running             storage-provisioner         1                   408a3d323db36       storage-provisioner                          kube-system
	d1676fa9eca57       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Running             kube-proxy                  1                   e46d71aa8ab79       kube-proxy-7gcgg                             kube-system
	ebf78c70183ef       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   da7a5290f8009       kindnet-mlmq4                                kube-system
	16eedf7015ecc       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Running             kube-scheduler              1                   8eafb46d63f5a       kube-scheduler-functional-157940             kube-system
	72036cb7bb3cc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   20654ebc64bf9       coredns-66bc5c9577-kfqb9                     kube-system
	c4772c5d06fc3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   408a3d323db36       storage-provisioner                          kube-system
	a9aeef8f4f215       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 12 minutes ago      Exited              kindnet-cni                 0                   da7a5290f8009       kindnet-mlmq4                                kube-system
	b77949bacb92f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 12 minutes ago      Exited              kube-proxy                  0                   e46d71aa8ab79       kube-proxy-7gcgg                             kube-system
	edceb9b75759b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 12 minutes ago      Exited              kube-scheduler              0                   8eafb46d63f5a       kube-scheduler-functional-157940             kube-system
	086096668db79       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 12 minutes ago      Exited              etcd                        0                   6023c682f7350       etcd-functional-157940                       kube-system
	
	
	==> coredns [145c9d18b2b25a49e9425049b29f00f6228b8e8fdd3e18f51eca4a3910ee965d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40725 - 58293 "HINFO IN 2680615184560877707.5441538975210426050. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.037268318s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=484": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=484": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [72036cb7bb3ccef26c1642d4552a73d238dde5e09cfa921c5ed75914a5ed4a0c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39766 - 10439 "HINFO IN 6683629529616234879.6338993374578167575. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032631414s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-157940
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-157940
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=functional-157940
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_29_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:28:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-157940
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:41:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:38:53 +0000   Sun, 23 Nov 2025 09:28:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:38:53 +0000   Sun, 23 Nov 2025 09:28:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:38:53 +0000   Sun, 23 Nov 2025 09:28:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:38:53 +0000   Sun, 23 Nov 2025 09:29:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-157940
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                ed9c5843-ddfd-43bb-b77f-6144c337069e
	  Boot ID:                    37682299-5e60-467e-85b2-43c912a4056e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-qvtgh                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-k7p7h           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-xb9rw                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m6s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 coredns-66bc5c9577-kfqb9                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-157940                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-mlmq4                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-157940              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-157940     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-7gcgg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-157940              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-ccfbk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-74rj2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-157940 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-157940 status is now: NodeHasSufficientMemory
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-157940 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-157940 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-157940 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-157940 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-157940 event: Registered Node functional-157940 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-157940 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node functional-157940 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node functional-157940 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node functional-157940 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-157940 event: Registered Node functional-157940 in Controller
	
	
	==> dmesg <==
	[  +0.078010] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021497] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276866] kauditd_printk_skb: 47 callbacks suppressed
	[Nov23 09:25] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.037608] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.023905] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.023966] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000012] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +2.048049] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +4.031511] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +8.255356] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[ +16.383752] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 09:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	
	
	==> etcd [086096668db79d9bbdae7dc81e66fdef34bb4355578b51de7dbfdae4f48ffbda] <==
	{"level":"warn","ts":"2025-11-23T09:28:58.035105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:28:58.041362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:28:58.046999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:28:58.062372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:28:58.068634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:28:58.074624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:28:58.119008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50852","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T09:30:12.741433Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-23T09:30:12.741521Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-157940","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-23T09:30:12.741629Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-23T09:30:19.742739Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-23T09:30:19.742827Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T09:30:19.742855Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-23T09:30:19.742988Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-23T09:30:19.743007Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-23T09:30:19.743749Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T09:30:19.743796Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T09:30:19.743870Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T09:30:19.743886Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-23T09:30:19.743828Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T09:30:19.743908Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T09:30:19.745456Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-23T09:30:19.745521Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T09:30:19.745548Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-23T09:30:19.745555Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-157940","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [d822318a207f81a2f1e01bf9f5f2b086d5a60ff03a9cc0df141d2954514aa6d3] <==
	{"level":"warn","ts":"2025-11-23T09:30:48.938928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:30:48.945117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:30:48.951031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:30:48.956791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:30:48.962636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:30:48.969595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:30:48.975848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:30:48.982083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:30:48.988717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:30:48.995263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:30:49.001773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:30:49.007772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:30:49.022489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:30:49.028506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:30:49.034284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:30:49.079593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:32:22.145972Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.936013ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:957"}
	{"level":"info","ts":"2025-11-23T09:32:22.146121Z","caller":"traceutil/trace.go:172","msg":"trace[1234563353] range","detail":"{range_begin:/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:861; }","duration":"121.069367ms","start":"2025-11-23T09:32:22.025013Z","end":"2025-11-23T09:32:22.146083Z","steps":["trace[1234563353] 'range keys from in-memory index tree'  (duration: 120.790514ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:32:22.146147Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.113864ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:957"}
	{"level":"warn","ts":"2025-11-23T09:32:22.145971Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.242529ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:799"}
	{"level":"info","ts":"2025-11-23T09:32:22.146194Z","caller":"traceutil/trace.go:172","msg":"trace[2076799782] range","detail":"{range_begin:/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:861; }","duration":"121.170751ms","start":"2025-11-23T09:32:22.025012Z","end":"2025-11-23T09:32:22.146183Z","steps":["trace[2076799782] 'range keys from in-memory index tree'  (duration: 120.946779ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:32:22.146203Z","caller":"traceutil/trace.go:172","msg":"trace[708461827] range","detail":"{range_begin:/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:861; }","duration":"160.493018ms","start":"2025-11-23T09:32:21.985695Z","end":"2025-11-23T09:32:22.146188Z","steps":["trace[708461827] 'range keys from in-memory index tree'  (duration: 160.134555ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:40:48.638516Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1138}
	{"level":"info","ts":"2025-11-23T09:40:48.657629Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1138,"took":"18.773774ms","hash":360353651,"current-db-size-bytes":3444736,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1560576,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-11-23T09:40:48.657675Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":360353651,"revision":1138,"compact-revision":-1}
	
	
	==> kernel <==
	 09:41:15 up  2:23,  0 user,  load average: 0.00, 0.18, 0.80
	Linux functional-157940 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a9aeef8f4f215b4b726cfdfb2f4efcfd5ec0553f0b72fc146dca1c52d31b06eb] <==
	I1123 09:29:07.138179       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:29:07.138455       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1123 09:29:07.138608       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:29:07.138623       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:29:07.138641       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:29:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:29:07.339403       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:29:07.339436       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:29:07.339453       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:29:07.339977       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 09:29:37.341248       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 09:29:37.341258       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 09:29:37.341257       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 09:29:37.341255       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1123 09:29:38.640498       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:29:38.640527       1 metrics.go:72] Registering metrics
	I1123 09:29:38.640611       1 controller.go:711] "Syncing nftables rules"
	I1123 09:29:47.346795       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:29:47.346842       1 main.go:301] handling current node
	I1123 09:29:57.344194       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:29:57.344224       1 main.go:301] handling current node
	I1123 09:30:07.344072       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:30:07.344134       1 main.go:301] handling current node
	
	
	==> kindnet [ebf78c70183ef7da8a694924de4573d94a871a6e116e641e3ae12e618d400234] <==
	I1123 09:39:13.474847       1 main.go:301] handling current node
	I1123 09:39:23.483166       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:39:23.483199       1 main.go:301] handling current node
	I1123 09:39:33.474785       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:39:33.474821       1 main.go:301] handling current node
	I1123 09:39:43.476999       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:39:43.477060       1 main.go:301] handling current node
	I1123 09:39:53.479242       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:39:53.479275       1 main.go:301] handling current node
	I1123 09:40:03.481625       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:40:03.482010       1 main.go:301] handling current node
	I1123 09:40:13.475214       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:40:13.475257       1 main.go:301] handling current node
	I1123 09:40:23.478716       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:40:23.478754       1 main.go:301] handling current node
	I1123 09:40:33.481993       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:40:33.482025       1 main.go:301] handling current node
	I1123 09:40:43.479559       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:40:43.479594       1 main.go:301] handling current node
	I1123 09:40:53.480164       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:40:53.480195       1 main.go:301] handling current node
	I1123 09:41:03.483337       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:41:03.483373       1 main.go:301] handling current node
	I1123 09:41:13.479574       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:41:13.479605       1 main.go:301] handling current node
	
	
	==> kube-apiserver [46bd1350bb8612a3e282e46d911d765c5746291441d1205e0aa5cbe462e077ed] <==
	I1123 09:30:49.565155       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:30:50.432783       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1123 09:30:50.637692       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1123 09:30:50.638781       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:30:50.642702       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:30:56.250513       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:30:57.722234       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:31:08.611882       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.249.192"}
	I1123 09:31:12.121758       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:31:12.231896       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.202.236"}
	I1123 09:31:13.167152       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.108.200.241"}
	I1123 09:31:14.122032       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.105.121.89"}
	E1123 09:31:59.145583       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37716: use of closed network connection
	E1123 09:32:06.773557       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53340: use of closed network connection
	I1123 09:32:09.050359       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.97.40.141"}
	I1123 09:32:21.729872       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 09:32:21.834230       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:32:21.842872       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:32:21.895394       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.153.69"}
	I1123 09:32:21.912640       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.109.169"}
	E1123 09:32:22.251891       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45986: use of closed network connection
	E1123 09:32:23.101905       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38710: use of closed network connection
	E1123 09:32:24.483943       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38724: use of closed network connection
	E1123 09:32:27.236160       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38740: use of closed network connection
	I1123 09:40:49.454099       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-apiserver [52dca88ce5a73b0859b01d65c28f8e4086184049c3c47f1668914beb61c14b62] <==
	I1123 09:30:33.370700       1 options.go:263] external host was not specified, using 192.168.49.2
	I1123 09:30:33.373160       1 server.go:150] Version: v1.34.1
	I1123 09:30:33.373187       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1123 09:30:33.373507       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	
	==> kube-controller-manager [145614830dc8cb960026dd8beb808413b070e36c7cbe09fa134f8264bbce1513] <==
	I1123 09:30:57.615855       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 09:30:57.615915       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:30:57.615942       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 09:30:57.616030       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-157940"
	I1123 09:30:57.616079       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 09:30:57.616314       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 09:30:57.616391       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:30:57.616511       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:30:57.618408       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 09:30:57.622553       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 09:30:57.622653       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:30:57.623754       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 09:30:57.623810       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:30:57.625057       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 09:30:57.627350       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 09:30:57.628480       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 09:30:57.630768       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 09:30:57.631956       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:30:57.635290       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1123 09:32:21.836493       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1123 09:32:21.840743       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1123 09:32:21.842414       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1123 09:32:21.846365       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1123 09:32:21.847665       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1123 09:32:21.851751       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [5dc38b8ef8203257e359be5d9e25f7cbef6cd4c6a9d7b8b920b98a8b6c7d9983] <==
	I1123 09:30:33.870444       1 serving.go:386] Generated self-signed cert in-memory
	I1123 09:30:34.324684       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1123 09:30:34.324708       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:30:34.326034       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1123 09:30:34.326034       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1123 09:30:34.326260       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1123 09:30:34.326341       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1123 09:30:44.328212       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [b77949bacb92f4a9c86875b0368d3a064f8a54482ab649f62a5d2b99643f6ef7] <==
	I1123 09:29:07.006268       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:29:07.068977       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:29:07.169158       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:29:07.169194       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 09:29:07.169291       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:29:07.186478       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:29:07.186533       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:29:07.191632       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:29:07.192068       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:29:07.192116       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:29:07.194422       1 config.go:200] "Starting service config controller"
	I1123 09:29:07.194673       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:29:07.194614       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:29:07.194706       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:29:07.194754       1 config.go:309] "Starting node config controller"
	I1123 09:29:07.194770       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:29:07.194776       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:29:07.194799       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:29:07.194805       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:29:07.294864       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:29:07.294879       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:29:07.294902       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [d1676fa9eca576b6425b0baad9db59858c84baf293dc12990469fd41159d501c] <==
	I1123 09:30:13.077267       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:30:13.154432       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:30:13.255193       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:30:13.255263       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 09:30:13.255414       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:30:13.275514       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:30:13.275584       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:30:13.281327       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:30:13.281626       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:30:13.281642       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:30:13.282728       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:30:13.282754       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:30:13.282788       1 config.go:200] "Starting service config controller"
	I1123 09:30:13.282815       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:30:13.282813       1 config.go:309] "Starting node config controller"
	I1123 09:30:13.282841       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:30:13.282818       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:30:13.282867       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:30:13.383473       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:30:13.383516       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:30:13.383517       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:30:13.383530       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [16eedf7015ecc78332145e56db377ffc2b9140724c19244bf8b1a6e49f29842b] <==
	I1123 09:30:21.941124       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	E1123 09:30:31.356424       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:30:31.358395       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:30:31.359227       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:30:31.361674       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:30:31.361731       1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:30:31.361751       1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:30:31.361766       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:30:31.361800       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:30:31.361829       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:30:31.361852       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:30:31.361874       1 reflector.go:205] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:30:31.361898       1 reflector.go:205] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:30:31.361928       1 reflector.go:205] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:30:31.361971       1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:30:31.370905       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 09:30:31.371035       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 09:30:31.371122       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 09:30:34.485045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?resourceVersion=487\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:30:34.507598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?resourceVersion=486\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:30:38.494904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?resourceVersion=487\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:30:38.769464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?resourceVersion=486\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:30:46.829489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?resourceVersion=487\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:30:48.277384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?resourceVersion=486\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:30:49.445391       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	
	
	==> kube-scheduler [edceb9b75759b1146898861418fba83e5bd9964586f9a62c2c397b1e863e366a] <==
	E1123 09:28:58.512729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:28:58.512831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:28:58.512873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:28:58.512954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:28:58.512953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:28:58.512970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:28:58.512982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:28:58.513003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:28:59.324403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:28:59.324421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:28:59.470294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:28:59.539070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:28:59.573175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:28:59.592288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:28:59.634660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:28:59.681919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:28:59.721274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:28:59.811314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1123 09:29:02.809981       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:30:12.632104       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1123 09:30:12.632180       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:30:12.632284       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1123 09:30:12.632306       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1123 09:30:12.632567       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1123 09:30:12.632613       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 23 09:38:35 functional-157940 kubelet[4341]: E1123 09:38:35.240761    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-k7p7h" podUID="f85f0981-b42d-4b92-b370-beb660cbaada"
	Nov 23 09:38:40 functional-157940 kubelet[4341]: E1123 09:38:40.240907    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qvtgh" podUID="87cee158-6afd-4059-8a26-c4fe067f4bce"
	Nov 23 09:38:50 functional-157940 kubelet[4341]: E1123 09:38:50.240397    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-k7p7h" podUID="f85f0981-b42d-4b92-b370-beb660cbaada"
	Nov 23 09:38:55 functional-157940 kubelet[4341]: E1123 09:38:55.240287    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qvtgh" podUID="87cee158-6afd-4059-8a26-c4fe067f4bce"
	Nov 23 09:39:04 functional-157940 kubelet[4341]: E1123 09:39:04.240985    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-k7p7h" podUID="f85f0981-b42d-4b92-b370-beb660cbaada"
	Nov 23 09:39:10 functional-157940 kubelet[4341]: E1123 09:39:10.241028    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qvtgh" podUID="87cee158-6afd-4059-8a26-c4fe067f4bce"
	Nov 23 09:39:17 functional-157940 kubelet[4341]: E1123 09:39:17.240056    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-k7p7h" podUID="f85f0981-b42d-4b92-b370-beb660cbaada"
	Nov 23 09:39:22 functional-157940 kubelet[4341]: E1123 09:39:22.241523    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qvtgh" podUID="87cee158-6afd-4059-8a26-c4fe067f4bce"
	Nov 23 09:39:28 functional-157940 kubelet[4341]: E1123 09:39:28.240805    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-k7p7h" podUID="f85f0981-b42d-4b92-b370-beb660cbaada"
	Nov 23 09:39:33 functional-157940 kubelet[4341]: E1123 09:39:33.240492    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qvtgh" podUID="87cee158-6afd-4059-8a26-c4fe067f4bce"
	Nov 23 09:39:39 functional-157940 kubelet[4341]: E1123 09:39:39.240402    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-k7p7h" podUID="f85f0981-b42d-4b92-b370-beb660cbaada"
	Nov 23 09:39:48 functional-157940 kubelet[4341]: E1123 09:39:48.240631    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qvtgh" podUID="87cee158-6afd-4059-8a26-c4fe067f4bce"
	Nov 23 09:39:52 functional-157940 kubelet[4341]: E1123 09:39:52.241073    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-k7p7h" podUID="f85f0981-b42d-4b92-b370-beb660cbaada"
	Nov 23 09:40:03 functional-157940 kubelet[4341]: E1123 09:40:03.240617    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qvtgh" podUID="87cee158-6afd-4059-8a26-c4fe067f4bce"
	Nov 23 09:40:04 functional-157940 kubelet[4341]: E1123 09:40:04.240509    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-k7p7h" podUID="f85f0981-b42d-4b92-b370-beb660cbaada"
	Nov 23 09:40:17 functional-157940 kubelet[4341]: E1123 09:40:17.240813    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qvtgh" podUID="87cee158-6afd-4059-8a26-c4fe067f4bce"
	Nov 23 09:40:19 functional-157940 kubelet[4341]: E1123 09:40:19.240628    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-k7p7h" podUID="f85f0981-b42d-4b92-b370-beb660cbaada"
	Nov 23 09:40:28 functional-157940 kubelet[4341]: E1123 09:40:28.240474    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qvtgh" podUID="87cee158-6afd-4059-8a26-c4fe067f4bce"
	Nov 23 09:40:32 functional-157940 kubelet[4341]: E1123 09:40:32.240875    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-k7p7h" podUID="f85f0981-b42d-4b92-b370-beb660cbaada"
	Nov 23 09:40:39 functional-157940 kubelet[4341]: E1123 09:40:39.240053    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qvtgh" podUID="87cee158-6afd-4059-8a26-c4fe067f4bce"
	Nov 23 09:40:43 functional-157940 kubelet[4341]: E1123 09:40:43.240921    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-k7p7h" podUID="f85f0981-b42d-4b92-b370-beb660cbaada"
	Nov 23 09:40:52 functional-157940 kubelet[4341]: E1123 09:40:52.241105    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qvtgh" podUID="87cee158-6afd-4059-8a26-c4fe067f4bce"
	Nov 23 09:40:54 functional-157940 kubelet[4341]: E1123 09:40:54.240361    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-k7p7h" podUID="f85f0981-b42d-4b92-b370-beb660cbaada"
	Nov 23 09:41:06 functional-157940 kubelet[4341]: E1123 09:41:06.240583    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-k7p7h" podUID="f85f0981-b42d-4b92-b370-beb660cbaada"
	Nov 23 09:41:06 functional-157940 kubelet[4341]: E1123 09:41:06.240693    4341 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qvtgh" podUID="87cee158-6afd-4059-8a26-c4fe067f4bce"
	
	
	==> kubernetes-dashboard [e0e87af43418b4cde265e87c82823d22dc6d363d54c0b2460be7b40801175a3f] <==
	2025/11/23 09:32:25 Starting overwatch
	2025/11/23 09:32:25 Using namespace: kubernetes-dashboard
	2025/11/23 09:32:25 Using in-cluster config to connect to apiserver
	2025/11/23 09:32:25 Using secret token for csrf signing
	2025/11/23 09:32:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 09:32:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 09:32:25 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 09:32:25 Generating JWE encryption key
	2025/11/23 09:32:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 09:32:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 09:32:26 Initializing JWE encryption key from synchronized object
	2025/11/23 09:32:26 Creating in-cluster Sidecar client
	2025/11/23 09:32:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 09:32:26 Serving insecurely on HTTP port: 9090
	2025/11/23 09:32:56 Successful request to sidecar
	
	
	==> storage-provisioner [c4772c5d06fc3ef3b79f5c4272d5ba8393e0098d97b84e391c15fb2bf3c25b7b] <==
	I1123 09:29:47.940031       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-157940_3c41e267-8baa-439f-987e-207186b06906!
	W1123 09:29:49.847824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:29:49.853410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:29:51.855959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:29:51.859126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:29:53.861717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:29:53.865871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:29:55.869708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:29:55.874811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:29:57.877472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:29:57.881122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:29:59.883575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:29:59.887269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:30:01.891149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:30:01.895160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:30:03.898400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:30:03.902372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:30:05.906067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:30:05.910299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:30:07.913828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:30:07.917994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:30:09.921759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:30:09.926029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:30:11.929816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:30:11.933896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [dee22df063269abb84885917b52feb783c6435656ca09028b4e0fc0ebf639af5] <==
	W1123 09:40:51.315837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:40:53.318828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:40:53.322318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:40:55.325161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:40:55.329973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:40:57.334341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:40:57.338709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:40:59.341714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:40:59.345930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:41:01.349130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:41:01.353009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:41:03.356651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:41:03.361614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:41:05.364543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:41:05.368748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:41:07.372330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:41:07.376256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:41:09.379615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:41:09.384205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:41:11.387342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:41:11.391269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:41:13.394786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:41:13.398279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:41:15.401735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:41:15.405847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-157940 -n functional-157940
helpers_test.go:269: (dbg) Run:  kubectl --context functional-157940 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-qvtgh hello-node-connect-7d85dfc575-k7p7h
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-157940 describe pod busybox-mount hello-node-75c85bcc94-qvtgh hello-node-connect-7d85dfc575-k7p7h
helpers_test.go:290: (dbg) kubectl --context functional-157940 describe pod busybox-mount hello-node-75c85bcc94-qvtgh hello-node-connect-7d85dfc575-k7p7h:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-157940/192.168.49.2
	Start Time:       Sun, 23 Nov 2025 09:32:01 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://96364ae6369a926f20ac268e8d9e274e1bc2d63eddafc4e4c9c6018f7ef56d99
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 23 Nov 2025 09:32:03 +0000
	      Finished:     Sun, 23 Nov 2025 09:32:03 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lpvpk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-lpvpk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m15s  default-scheduler  Successfully assigned default/busybox-mount to functional-157940
	  Normal  Pulling    9m15s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m13s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.079s (2.079s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m13s  kubelet            Created container: mount-munger
	  Normal  Started    9m13s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-qvtgh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-157940/192.168.49.2
	Start Time:       Sun, 23 Nov 2025 09:31:46 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.3
	IPs:
	  IP:           10.244.0.3
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qxfh2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qxfh2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m30s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-qvtgh to functional-157940
	  Normal   Pulling    6m24s (x5 over 9m29s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m24s (x5 over 9m29s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m24s (x5 over 9m29s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m18s (x21 over 9m29s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m18s (x21 over 9m29s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-k7p7h
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-157940/192.168.49.2
	Start Time:       Sun, 23 Nov 2025 09:31:46 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9pws5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9pws5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m30s                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-k7p7h to functional-157940
	  Normal   Pulling    6m21s (x5 over 9m29s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m21s (x5 over 9m24s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m21s (x5 over 9m24s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m14s (x21 over 9m23s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m14s (x21 over 9m23s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.94s)
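
The ImagePullBackOff events above all trace back to CRI-O short-name resolution: the pods reference the unqualified image kicbase/echo-server while the node enforces short-name mode, so the ambiguous short name is rejected rather than resolved. A minimal diagnostic sketch, assuming the profile name from this run and the default registries.conf location inside the node (the grep pattern and the fully-qualified reference are illustrative, not part of the test):

	minikube -p functional-157940 ssh "grep -E 'short-name-mode|unqualified-search-registries' /etc/containers/registries.conf"
	minikube -p functional-157940 ssh "sudo crictl pull docker.io/kicbase/echo-server:latest"

Pulling the fully-qualified docker.io/kicbase/echo-server:latest sidesteps short-name aliasing entirely, which separates a policy problem from a registry or network problem.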

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-157940 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-157940 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-qvtgh" [87cee158-6afd-4059-8a26-c4fe067f4bce] Pending
helpers_test.go:352: "hello-node-75c85bcc94-qvtgh" [87cee158-6afd-4059-8a26-c4fe067f4bce] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-157940 -n functional-157940
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-23 09:41:12.574448376 +0000 UTC m=+1159.411549798
functional_test.go:1460: (dbg) Run:  kubectl --context functional-157940 describe po hello-node-75c85bcc94-qvtgh -n default
functional_test.go:1460: (dbg) kubectl --context functional-157940 describe po hello-node-75c85bcc94-qvtgh -n default:
Name:             hello-node-75c85bcc94-qvtgh
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-157940/192.168.49.2
Start Time:       Sun, 23 Nov 2025 09:31:46 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.3
IPs:
IP:           10.244.0.3
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qxfh2 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-qxfh2:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m25s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-qvtgh to functional-157940
Normal   Pulling    6m20s (x5 over 9m25s)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m20s (x5 over 9m25s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m20s (x5 over 9m25s)   kubelet            Error: ErrImagePull
Normal   BackOff    4m14s (x21 over 9m25s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m14s (x21 over 9m25s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-157940 logs hello-node-75c85bcc94-qvtgh -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-157940 logs hello-node-75c85bcc94-qvtgh -n default: exit status 1 (65.648054ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-qvtgh" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-157940 logs hello-node-75c85bcc94-qvtgh -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.65s)
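
The deployment created at functional_test.go:1451 uses the short name kicbase/echo-server, which is exactly what the enforcing short-name policy refuses to resolve. A hedged workaround sketch, with the fully-qualified image as an assumption about intent rather than the test's actual command:

	kubectl --context functional-157940 create deployment hello-node --image=docker.io/kicbase/echo-server:latest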

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 image load --daemon kicbase/echo-server:functional-157940 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-157940 image load --daemon kicbase/echo-server:functional-157940 --alsologtostderr: (1.144072952s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-157940" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)
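
image load --daemon exits cleanly here, yet the follow-up image ls at functional_test.go:466 never sees the tag, so the load apparently never reached the CRI-O image store. A small verification sketch, assuming the profile from this run (the crictl filter is illustrative):

	out/minikube-linux-amd64 -p functional-157940 image ls
	minikube -p functional-157940 ssh "sudo crictl images | grep echo-server"

Comparing minikube's listing with crictl inside the node confirms whether the image ever landed in the runtime or was lost on the load path.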

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 image load --daemon kicbase/echo-server:functional-157940 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-157940" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.12s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-157940
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 image load --daemon kicbase/echo-server:functional-157940 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-157940" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.79s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 image save kicbase/echo-server:functional-157940 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1123 09:32:20.272747  107451 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:32:20.273032  107451 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:32:20.273044  107451 out.go:374] Setting ErrFile to fd 2...
	I1123 09:32:20.273048  107451 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:32:20.273322  107451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:32:20.273934  107451 config.go:182] Loaded profile config "functional-157940": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:32:20.274033  107451 config.go:182] Loaded profile config "functional-157940": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:32:20.274494  107451 cli_runner.go:164] Run: docker container inspect functional-157940 --format={{.State.Status}}
	I1123 09:32:20.294332  107451 ssh_runner.go:195] Run: systemctl --version
	I1123 09:32:20.294381  107451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-157940
	I1123 09:32:20.311899  107451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/functional-157940/id_rsa Username:docker}
	I1123 09:32:20.411577  107451 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1123 09:32:20.411657  107451 cache_images.go:255] Failed to load cached images for "functional-157940": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1123 09:32:20.411678  107451 cache_images.go:267] failed pushing to: functional-157940

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
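
This failure is downstream of ImageSaveToFile: the tarball at /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar was never written, so the load has nothing to stat. A minimal reproduction sketch outside the harness, assuming the same profile and an arbitrary temporary path:

	out/minikube-linux-amd64 -p functional-157940 image save kicbase/echo-server:functional-157940 /tmp/echo-server-save.tar --alsologtostderr
	ls -l /tmp/echo-server-save.tar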

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-157940
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 image save --daemon kicbase/echo-server:functional-157940 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-157940
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-157940: exit status 1 (16.515401ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-157940

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-157940

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-157940 service --namespace=default --https --url hello-node: exit status 115 (559.223472ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30284
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-157940 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)
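
SVC_UNREACHABLE is a knock-on effect of the echo-server pods never starting: the hello-node Service exists and resolves to https://192.168.49.2:30284, but it has no ready endpoints behind it. A quick check sketch, assuming the context and namespace from this run:

	kubectl --context functional-157940 get endpoints hello-node -n default
	kubectl --context functional-157940 get pods -l app=hello-node -n default

The same missing-endpoint state explains the Format and URL subtests below.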

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-157940 service hello-node --url --format={{.IP}}: exit status 115 (559.550157ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-157940 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-157940 service hello-node --url: exit status 115 (539.167301ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30284
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-157940 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30284
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.32s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-066750 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-066750 --output=json --user=testUser: exit status 80 (2.3160958s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6892b63d-83eb-422c-912d-4b97df84aba5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-066750 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"3ff65c31-2c3d-4e28-ae41-2ca7acba86fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-23T09:51:18Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"a39d8143-d2cb-4fc7-80df-2340311ec59d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-066750 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.32s)
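
The GUEST_PAUSE error above appears to come from minikube running "sudo runc list -f json" on the node, which fails with "open /run/runc: no such file or directory" on this CRI-O profile. A minimal sketch for reproducing that call by hand, assuming the json-output-066750 container is still up: the profile name and the runc command are taken from the log; the directory check is an added assumption about where the runtime state might live instead (CRI-O's socket sits under /var/run/crio in the later logs).
	# re-run the exact command the pause step uses inside the node
	out/minikube-linux-amd64 -p json-output-066750 ssh -- sudo runc list -f json
	# check whether /run/runc exists at all, or whether only CRI-O's own state dir is present
	out/minikube-linux-amd64 -p json-output-066750 ssh -- ls -d /run/runc /run/crio
The same root cause shows up again in the TestJSONOutput/unpause/Command failure below (GUEST_UNPAUSE, identical runc error).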

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.36s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-066750 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-066750 --output=json --user=testUser: exit status 80 (1.357541317s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"208ce6dd-cfe8-4e8f-ba65-a40c6d98bb27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-066750 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"47983147-7dda-44fd-ad4e-6e62d4b4a3b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-23T09:51:19Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"b904011c-7154-424b-8e3d-d808e839ab88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-066750 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.36s)

                                                
                                    
x
+
TestPreload (439.01s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-954233 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-954233 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (49.153879678s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-954233 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-954233 image pull gcr.io/k8s-minikube/busybox: (2.263897749s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-954233
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-954233: (5.90670648s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-954233 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1123 10:01:12.239004   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:02:35.306540   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:02:57.077272   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:04:54.009806   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:06:12.237486   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p test-preload-954233 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: exit status 80 (6m18.044464032s)

                                                
                                                
-- stdout --
	* [test-preload-954233] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	* Using the docker driver based on existing profile
	* Starting "test-preload-954233" primary control-plane node in "test-preload-954233" cluster
	* Pulling base image v0.0.48-1763789673-21948 ...
	* Downloading Kubernetes v1.32.0 preload ...
	* Preparing Kubernetes v1.32.0 on CRI-O 1.34.2 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:01:08.946551  228826 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:01:08.946825  228826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:01:08.946836  228826 out.go:374] Setting ErrFile to fd 2...
	I1123 10:01:08.946840  228826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:01:08.947042  228826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:01:08.947494  228826 out.go:368] Setting JSON to false
	I1123 10:01:08.948586  228826 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9810,"bootTime":1763882259,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:01:08.948643  228826 start.go:143] virtualization: kvm guest
	I1123 10:01:08.950422  228826 out.go:179] * [test-preload-954233] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:01:08.951590  228826 notify.go:221] Checking for updates...
	I1123 10:01:08.951608  228826 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:01:08.952754  228826 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:01:08.953802  228826 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:01:08.954881  228826 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 10:01:08.955840  228826 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:01:08.956746  228826 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:01:08.958071  228826 config.go:182] Loaded profile config "test-preload-954233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1123 10:01:08.959498  228826 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1123 10:01:08.960345  228826 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:01:08.983122  228826 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 10:01:08.983241  228826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:01:09.043469  228826 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-23 10:01:09.031921931 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:01:09.043585  228826 docker.go:319] overlay module found
	I1123 10:01:09.045295  228826 out.go:179] * Using the docker driver based on existing profile
	I1123 10:01:09.046467  228826 start.go:309] selected driver: docker
	I1123 10:01:09.046481  228826 start.go:927] validating driver "docker" against &{Name:test-preload-954233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-954233 Namespace:default APIServerHAVIP: APIServ
erName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:01:09.046562  228826 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:01:09.047144  228826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:01:09.107217  228826 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-23 10:01:09.096567011 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:01:09.107479  228826 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:01:09.107511  228826 cni.go:84] Creating CNI manager for ""
	I1123 10:01:09.107572  228826 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:01:09.107619  228826 start.go:353] cluster config:
	{Name:test-preload-954233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-954233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:01:09.109862  228826 out.go:179] * Starting "test-preload-954233" primary control-plane node in "test-preload-954233" cluster
	I1123 10:01:09.110807  228826 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:01:09.111915  228826 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:01:09.112788  228826 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1123 10:01:09.112907  228826 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:01:09.133532  228826 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:01:09.133553  228826 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:01:09.526757  228826 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1123 10:01:09.526809  228826 cache.go:65] Caching tarball of preloaded images
	I1123 10:01:09.527030  228826 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1123 10:01:09.528620  228826 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1123 10:01:09.529599  228826 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1123 10:01:09.645135  228826 preload.go:295] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1123 10:01:09.645183  228826 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1123 10:01:20.192990  228826 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1123 10:01:20.193178  228826 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/config.json ...
	I1123 10:01:20.194235  228826 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:01:20.194297  228826 start.go:360] acquireMachinesLock for test-preload-954233: {Name:mkfd90ede73cd4bfbc6cf04937116c96f9dbe4ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:01:20.194366  228826 start.go:364] duration metric: took 43.871µs to acquireMachinesLock for "test-preload-954233"
	I1123 10:01:20.194382  228826 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:01:20.194387  228826 fix.go:54] fixHost starting: 
	I1123 10:01:20.194642  228826 cli_runner.go:164] Run: docker container inspect test-preload-954233 --format={{.State.Status}}
	I1123 10:01:20.211038  228826 fix.go:112] recreateIfNeeded on test-preload-954233: state=Stopped err=<nil>
	W1123 10:01:20.211108  228826 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 10:01:20.212878  228826 out.go:252] * Restarting existing docker container for "test-preload-954233" ...
	I1123 10:01:20.212948  228826 cli_runner.go:164] Run: docker start test-preload-954233
	I1123 10:01:20.475249  228826 cli_runner.go:164] Run: docker container inspect test-preload-954233 --format={{.State.Status}}
	I1123 10:01:20.494943  228826 kic.go:430] container "test-preload-954233" state is running.
	I1123 10:01:20.495354  228826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-954233
	I1123 10:01:20.514025  228826 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/config.json ...
	I1123 10:01:20.514293  228826 machine.go:94] provisionDockerMachine start ...
	I1123 10:01:20.514382  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:20.533529  228826 main.go:143] libmachine: Using SSH client type: native
	I1123 10:01:20.533871  228826 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1123 10:01:20.533887  228826 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:01:20.534542  228826 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51358->127.0.0.1:32958: read: connection reset by peer
	I1123 10:01:23.678901  228826 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-954233
	
	I1123 10:01:23.678946  228826 ubuntu.go:182] provisioning hostname "test-preload-954233"
	I1123 10:01:23.679005  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:23.696025  228826 main.go:143] libmachine: Using SSH client type: native
	I1123 10:01:23.696269  228826 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1123 10:01:23.696284  228826 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-954233 && echo "test-preload-954233" | sudo tee /etc/hostname
	I1123 10:01:23.846319  228826 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-954233
	
	I1123 10:01:23.846418  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:23.863785  228826 main.go:143] libmachine: Using SSH client type: native
	I1123 10:01:23.864011  228826 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1123 10:01:23.864026  228826 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-954233' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-954233/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-954233' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:01:24.004600  228826 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:01:24.004631  228826 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-64343/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-64343/.minikube}
	I1123 10:01:24.004653  228826 ubuntu.go:190] setting up certificates
	I1123 10:01:24.004665  228826 provision.go:84] configureAuth start
	I1123 10:01:24.004721  228826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-954233
	I1123 10:01:24.022193  228826 provision.go:143] copyHostCerts
	I1123 10:01:24.022250  228826 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem, removing ...
	I1123 10:01:24.022271  228826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem
	I1123 10:01:24.022340  228826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem (1675 bytes)
	I1123 10:01:24.022453  228826 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem, removing ...
	I1123 10:01:24.022466  228826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem
	I1123 10:01:24.022496  228826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem (1082 bytes)
	I1123 10:01:24.022564  228826 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem, removing ...
	I1123 10:01:24.022572  228826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem
	I1123 10:01:24.022596  228826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem (1123 bytes)
	I1123 10:01:24.022668  228826 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem org=jenkins.test-preload-954233 san=[127.0.0.1 192.168.76.2 localhost minikube test-preload-954233]
	I1123 10:01:24.089180  228826 provision.go:177] copyRemoteCerts
	I1123 10:01:24.089245  228826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:01:24.089284  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:24.106405  228826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/test-preload-954233/id_rsa Username:docker}
	I1123 10:01:24.207698  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 10:01:24.224329  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 10:01:24.240705  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:01:24.256757  228826 provision.go:87] duration metric: took 252.077049ms to configureAuth
	I1123 10:01:24.256785  228826 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:01:24.256959  228826 config.go:182] Loaded profile config "test-preload-954233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1123 10:01:24.257103  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:24.273786  228826 main.go:143] libmachine: Using SSH client type: native
	I1123 10:01:24.273999  228826 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1123 10:01:24.274017  228826 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:01:24.575066  228826 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:01:24.575114  228826 machine.go:97] duration metric: took 4.060801733s to provisionDockerMachine
	I1123 10:01:24.575131  228826 start.go:293] postStartSetup for "test-preload-954233" (driver="docker")
	I1123 10:01:24.575146  228826 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:01:24.575240  228826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:01:24.575301  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:24.592340  228826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/test-preload-954233/id_rsa Username:docker}
	I1123 10:01:24.692131  228826 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:01:24.695543  228826 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:01:24.695578  228826 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:01:24.695590  228826 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/addons for local assets ...
	I1123 10:01:24.695643  228826 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/files for local assets ...
	I1123 10:01:24.695741  228826 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem -> 678702.pem in /etc/ssl/certs
	I1123 10:01:24.695856  228826 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:01:24.702924  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:01:24.719542  228826 start.go:296] duration metric: took 144.393811ms for postStartSetup
	I1123 10:01:24.719616  228826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:01:24.719675  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:24.736421  228826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/test-preload-954233/id_rsa Username:docker}
	I1123 10:01:24.833187  228826 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:01:24.837712  228826 fix.go:56] duration metric: took 4.643315335s for fixHost
	I1123 10:01:24.837742  228826 start.go:83] releasing machines lock for "test-preload-954233", held for 4.643363694s
	I1123 10:01:24.837819  228826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-954233
	I1123 10:01:24.854663  228826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:01:24.854707  228826 ssh_runner.go:195] Run: cat /version.json
	I1123 10:01:24.854765  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:24.854766  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:24.873067  228826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/test-preload-954233/id_rsa Username:docker}
	I1123 10:01:24.873342  228826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/test-preload-954233/id_rsa Username:docker}
	I1123 10:01:24.970629  228826 ssh_runner.go:195] Run: systemctl --version
	I1123 10:01:25.025854  228826 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:01:25.059631  228826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:01:25.064526  228826 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:01:25.064609  228826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:01:25.072540  228826 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:01:25.072567  228826 start.go:496] detecting cgroup driver to use...
	I1123 10:01:25.072608  228826 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 10:01:25.072656  228826 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:01:25.086659  228826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:01:25.098713  228826 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:01:25.098771  228826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:01:25.112393  228826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:01:25.123876  228826 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:01:25.199750  228826 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:01:25.277399  228826 docker.go:234] disabling docker service ...
	I1123 10:01:25.277460  228826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:01:25.290972  228826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:01:25.302589  228826 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:01:25.376925  228826 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:01:25.458188  228826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:01:25.470378  228826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:01:25.484076  228826 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1123 10:01:25.484151  228826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:01:25.492781  228826 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 10:01:25.492834  228826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:01:25.501148  228826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:01:25.509341  228826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:01:25.517576  228826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:01:25.525445  228826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:01:25.533781  228826 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:01:25.541759  228826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:01:25.549991  228826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:01:25.556940  228826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:01:25.563786  228826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:01:25.640652  228826 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:01:25.768119  228826 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:01:25.768192  228826 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:01:25.772038  228826 start.go:564] Will wait 60s for crictl version
	I1123 10:01:25.772117  228826 ssh_runner.go:195] Run: which crictl
	I1123 10:01:25.775436  228826 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:01:25.799027  228826 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:01:25.799132  228826 ssh_runner.go:195] Run: crio --version
	I1123 10:01:25.826060  228826 ssh_runner.go:195] Run: crio --version
	I1123 10:01:25.854596  228826 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.34.2 ...
	I1123 10:01:25.855762  228826 cli_runner.go:164] Run: docker network inspect test-preload-954233 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:01:25.872454  228826 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:01:25.876418  228826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:01:25.886208  228826 kubeadm.go:884] updating cluster {Name:test-preload-954233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-954233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics
:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:01:25.886364  228826 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1123 10:01:25.886423  228826 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:01:25.918201  228826 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:01:25.918228  228826 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:01:25.918289  228826 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:01:25.941477  228826 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:01:25.941499  228826 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:01:25.941506  228826 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.32.0 crio true true} ...
	I1123 10:01:25.941603  228826 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=test-preload-954233 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-954233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:01:25.941662  228826 ssh_runner.go:195] Run: crio config
	I1123 10:01:25.985348  228826 cni.go:84] Creating CNI manager for ""
	I1123 10:01:25.985367  228826 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:01:25.985383  228826 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:01:25.985407  228826 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-954233 NodeName:test-preload-954233 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:01:25.985535  228826 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-954233"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:01:25.985598  228826 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1123 10:01:25.993448  228826 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:01:25.993504  228826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:01:26.000723  228826 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1123 10:01:26.012590  228826 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:01:26.024287  228826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1123 10:01:26.036199  228826 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:01:26.039641  228826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:01:26.048774  228826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:01:26.130157  228826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:01:26.152208  228826 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233 for IP: 192.168.76.2
	I1123 10:01:26.152234  228826 certs.go:195] generating shared ca certs ...
	I1123 10:01:26.152255  228826 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:01:26.152419  228826 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 10:01:26.152473  228826 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 10:01:26.152488  228826 certs.go:257] generating profile certs ...
	I1123 10:01:26.152611  228826 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/client.key
	I1123 10:01:26.152690  228826 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/apiserver.key.76194393
	I1123 10:01:26.152748  228826 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/proxy-client.key
	I1123 10:01:26.152873  228826 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem (1338 bytes)
	W1123 10:01:26.152917  228826 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870_empty.pem, impossibly tiny 0 bytes
	I1123 10:01:26.152932  228826 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 10:01:26.152967  228826 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:01:26.153003  228826 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:01:26.153045  228826 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 10:01:26.153118  228826 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:01:26.153759  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:01:26.171611  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:01:26.190488  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:01:26.208106  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 10:01:26.229851  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 10:01:26.249309  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 10:01:26.265932  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:01:26.282236  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:01:26.298337  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:01:26.314402  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem --> /usr/share/ca-certificates/67870.pem (1338 bytes)
	I1123 10:01:26.330245  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /usr/share/ca-certificates/678702.pem (1708 bytes)
	I1123 10:01:26.347279  228826 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:01:26.358953  228826 ssh_runner.go:195] Run: openssl version
	I1123 10:01:26.364736  228826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67870.pem && ln -fs /usr/share/ca-certificates/67870.pem /etc/ssl/certs/67870.pem"
	I1123 10:01:26.372431  228826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67870.pem
	I1123 10:01:26.375795  228826 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:28 /usr/share/ca-certificates/67870.pem
	I1123 10:01:26.375849  228826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67870.pem
	I1123 10:01:26.409472  228826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67870.pem /etc/ssl/certs/51391683.0"
	I1123 10:01:26.416735  228826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678702.pem && ln -fs /usr/share/ca-certificates/678702.pem /etc/ssl/certs/678702.pem"
	I1123 10:01:26.424951  228826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678702.pem
	I1123 10:01:26.429451  228826 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:28 /usr/share/ca-certificates/678702.pem
	I1123 10:01:26.429502  228826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678702.pem
	I1123 10:01:26.462659  228826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678702.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:01:26.470843  228826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:01:26.478938  228826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:01:26.482454  228826 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:01:26.482511  228826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:01:26.515969  228826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:01:26.523572  228826 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:01:26.527170  228826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:01:26.560797  228826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:01:26.594467  228826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:01:26.627990  228826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:01:26.670906  228826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:01:26.718635  228826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
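
The six checks above mirror "openssl x509 -checkend 86400": each control-plane certificate is rejected if it expires within the next 24 hours. Below is a minimal Go sketch of the same check, assuming a PEM-encoded certificate on disk; it is illustrative only, since minikube itself shells out to openssl as shown in the log.

// certcheck.go - illustrative sketch, not minikube's implementation.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend 86400` answers for 24 hours.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
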
	I1123 10:01:26.757374  228826 kubeadm.go:401] StartCluster: {Name:test-preload-954233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-954233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:01:26.757477  228826 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:01:26.757545  228826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:01:26.784929  228826 cri.go:89] found id: ""
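
Here cri.go found no kube-system containers (empty id list) because the control plane has not been started yet. A hedged sketch of the same crictl query wrapped in Go follows, assuming crictl is on PATH on the node; this is illustrative and not minikube's actual cri.go helper, which runs the command over SSH as shown above.

// crilist.go - illustrative sketch of listing kube-system container IDs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainerIDs runs crictl and returns one container ID per line.
func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
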
	I1123 10:01:26.785023  228826 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:01:26.792966  228826 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:01:26.792985  228826 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:01:26.793047  228826 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:01:26.800243  228826 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:01:26.800657  228826 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-954233" does not appear in /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:01:26.800759  228826 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-64343/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-954233" cluster setting kubeconfig missing "test-preload-954233" context setting]
	I1123 10:01:26.801005  228826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
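
At this point kubeconfig.go has noticed that the profile's cluster and context entries are missing and repairs the kubeconfig before building a client config. A hedged sketch of that verification step using k8s.io/client-go/tools/clientcmd is shown below; the helper name and exact checks are assumptions for illustration, not minikube's kubeconfig.go.

// kubeconfigcheck.go - illustrative sketch of the endpoint verification.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// verifyEndpoint checks that the named cluster and context exist in the
// kubeconfig and that the cluster points at the expected API server URL.
func verifyEndpoint(kubeconfigPath, clusterName, wantServer string) error {
	cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
	if err != nil {
		return err
	}
	cluster, ok := cfg.Clusters[clusterName]
	if !ok {
		return fmt.Errorf("kubeconfig missing %q cluster setting", clusterName)
	}
	if cluster.Server != wantServer {
		return fmt.Errorf("cluster %q points at %s, want %s", clusterName, cluster.Server, wantServer)
	}
	if _, ok := cfg.Contexts[clusterName]; !ok {
		return fmt.Errorf("kubeconfig missing %q context setting", clusterName)
	}
	return nil
}

func main() {
	err := verifyEndpoint(
		"/home/jenkins/minikube-integration/21968-64343/kubeconfig",
		"test-preload-954233",
		"https://192.168.76.2:8443",
	)
	fmt.Println("verify:", err)
}
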
	I1123 10:01:26.801514  228826 kapi.go:59] client config for test-preload-954233: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/client.crt", KeyFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/client.key", CAFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 10:01:26.801900  228826 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1123 10:01:26.801912  228826 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1123 10:01:26.801917  228826 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1123 10:01:26.801921  228826 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1123 10:01:26.801924  228826 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1123 10:01:26.802311  228826 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:01:26.809528  228826 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 10:01:26.809561  228826 kubeadm.go:602] duration metric: took 16.568414ms to restartPrimaryControlPlane
	I1123 10:01:26.809572  228826 kubeadm.go:403] duration metric: took 52.211722ms to StartCluster
	I1123 10:01:26.809588  228826 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:01:26.809653  228826 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:01:26.810348  228826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:01:26.810589  228826 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:01:26.810663  228826 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:01:26.810754  228826 addons.go:70] Setting storage-provisioner=true in profile "test-preload-954233"
	I1123 10:01:26.810773  228826 addons.go:239] Setting addon storage-provisioner=true in "test-preload-954233"
	I1123 10:01:26.810771  228826 addons.go:70] Setting default-storageclass=true in profile "test-preload-954233"
	W1123 10:01:26.810785  228826 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:01:26.810798  228826 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-954233"
	I1123 10:01:26.810812  228826 config.go:182] Loaded profile config "test-preload-954233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1123 10:01:26.810819  228826 host.go:66] Checking if "test-preload-954233" exists ...
	I1123 10:01:26.811054  228826 cli_runner.go:164] Run: docker container inspect test-preload-954233 --format={{.State.Status}}
	I1123 10:01:26.811221  228826 cli_runner.go:164] Run: docker container inspect test-preload-954233 --format={{.State.Status}}
	I1123 10:01:26.813451  228826 out.go:179] * Verifying Kubernetes components...
	I1123 10:01:26.814635  228826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:01:26.829699  228826 kapi.go:59] client config for test-preload-954233: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/client.crt", KeyFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/client.key", CAFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 10:01:26.830042  228826 addons.go:239] Setting addon default-storageclass=true in "test-preload-954233"
	W1123 10:01:26.830061  228826 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:01:26.830106  228826 host.go:66] Checking if "test-preload-954233" exists ...
	I1123 10:01:26.830452  228826 cli_runner.go:164] Run: docker container inspect test-preload-954233 --format={{.State.Status}}
	I1123 10:01:26.831506  228826 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:01:26.832728  228826 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:01:26.832751  228826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:01:26.832818  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:26.857250  228826 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:01:26.857299  228826 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:01:26.857393  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:26.858440  228826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/test-preload-954233/id_rsa Username:docker}
	I1123 10:01:26.877257  228826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/test-preload-954233/id_rsa Username:docker}
	I1123 10:01:26.912766  228826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:01:26.925309  228826 node_ready.go:35] waiting up to 6m0s for node "test-preload-954233" to be "Ready" ...
	I1123 10:01:26.965100  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:01:26.983944  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:27.024468  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:27.024517  228826 retry.go:31] will retry after 142.659682ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
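
Both addon manifests fail to apply because kube-apiserver is not listening yet (connection refused on localhost:8443), so retry.go backs off and retries with growing, jittered delays, as the repeated blocks that follow show. Below is a minimal retry-with-backoff sketch of that pattern; it is illustrative only and not minikube's retry package.

// retrysketch.go - minimal retry-with-backoff loop (illustrative).
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn until it succeeds or the deadline passes,
// sleeping a jittered, doubling delay between attempts.
func retryUntil(deadline time.Duration, fn func() error) error {
	start := time.Now()
	delay := 150 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("giving up after %s: %w", deadline, err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	attempts := 0
	_ = retryUntil(5*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return fmt.Errorf("connection refused")
		}
		return nil
	})
	fmt.Println("succeeded after", attempts, "attempts")
}
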
	W1123 10:01:27.041628  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:27.041670  228826 retry.go:31] will retry after 292.996358ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:27.167925  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:01:27.220263  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:27.220301  228826 retry.go:31] will retry after 215.762042ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:27.335533  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:27.388922  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:27.388960  228826 retry.go:31] will retry after 369.458599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:27.436676  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:01:27.488943  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:27.488981  228826 retry.go:31] will retry after 636.360205ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:27.759408  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:27.811979  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:27.812029  228826 retry.go:31] will retry after 440.629206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:28.125960  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:01:28.179054  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:28.179113  228826 retry.go:31] will retry after 1.187761769s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:28.253310  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:28.305112  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:28.305145  228826 retry.go:31] will retry after 935.627035ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:01:28.926864  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
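
From this point node_ready.go polls the API server every couple of seconds for the node's Ready condition, logging each connection-refused attempt until the apiserver comes back (or the 6m0s budget runs out). A hedged client-go sketch of such a wait loop is given below; the kubeconfig path and poll interval are assumptions for illustration, not minikube's node_ready.go.

// nodeready.go - illustrative wait-for-Ready loop using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the named node until its Ready condition is True
// or the timeout expires, tolerating transient API errors along the way.
func waitForNodeReady(client *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitForNodeReady(client, "test-preload-954233", 6*time.Minute))
}
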
	I1123 10:01:29.241475  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:29.293890  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:29.293921  228826 retry.go:31] will retry after 678.405185ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:29.367073  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:01:29.419724  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:29.419757  228826 retry.go:31] will retry after 1.390495148s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:29.973429  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:30.026867  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:30.026902  228826 retry.go:31] will retry after 1.351740782s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:30.811293  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:01:30.866515  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:30.866548  228826 retry.go:31] will retry after 1.183229714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:31.379748  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:31.426738  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:01:31.433549  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:31.433584  228826 retry.go:31] will retry after 3.413292561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:32.050923  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:01:32.105472  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:32.105512  228826 retry.go:31] will retry after 4.123353459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:01:33.926624  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:01:34.847225  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:34.900761  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:34.900795  228826 retry.go:31] will retry after 5.118384238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:36.229498  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:01:36.283171  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:36.283207  228826 retry.go:31] will retry after 5.373739885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:01:36.426809  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:01:38.926136  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:01:40.020211  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:40.074626  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:40.074671  228826 retry.go:31] will retry after 8.964633846s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:01:40.926325  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:01:41.657845  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:01:41.711303  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:41.711336  228826 retry.go:31] will retry after 7.008244151s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:01:43.426224  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:01:45.426671  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:01:47.926175  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:01:48.720710  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:01:48.774610  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:48.774647  228826 retry.go:31] will retry after 13.437674867s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:49.039960  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:49.093577  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:49.093610  228826 retry.go:31] will retry after 8.541757259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:01:49.926654  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:01:52.426033  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:01:54.426458  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:01:56.925958  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:01:57.636397  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:57.689271  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:57.689389  228826 retry.go:31] will retry after 7.860599543s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:01:58.926250  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:00.926812  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:02:02.213477  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:02:02.269102  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:02:02.269137  228826 retry.go:31] will retry after 20.4403832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:02:03.426069  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:02:05.550318  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:02:05.605048  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:02:05.605118  228826 retry.go:31] will retry after 17.363651277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:02:05.925913  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:07.926609  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:10.426274  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:12.426710  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:14.926143  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:16.926700  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:19.426540  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:21.926061  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:02:22.710570  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:02:22.764777  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:02:22.764820  228826 retry.go:31] will retry after 18.160227899s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:02:22.969795  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:02:23.024639  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:02:23.024682  228826 retry.go:31] will retry after 39.446597275s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:02:23.926919  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:26.426470  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:28.426790  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:30.426861  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:32.926584  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:34.926817  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:37.426443  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:39.926645  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:02:40.925328  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:02:40.978804  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:02:40.978841  228826 retry.go:31] will retry after 44.410730442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:02:41.926919  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:44.426248  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:46.426679  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:48.926632  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:50.926719  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:53.426626  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:55.426716  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:57.426757  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:59.926720  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:02.426680  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:03:02.471941  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:03:02.528064  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:03:02.528237  228826 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1123 10:03:04.926782  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:07.426040  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:09.426610  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:11.925927  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:13.926351  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:16.426713  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:18.926016  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:21.425915  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:23.426775  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:03:25.390578  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:03:25.426825  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:25.444620  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:03:25.444792  228826 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1123 10:03:25.446628  228826 out.go:179] * Enabled addons: 
	I1123 10:03:25.447635  228826 addons.go:530] duration metric: took 1m58.636982783s for enable addons: enabled=[]
	W1123 10:03:27.926825  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:30.426593  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:32.926019  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:35.425867  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:37.426899  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:39.926816  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:41.926903  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:44.426057  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:46.925822  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:48.925882  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:50.926798  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:52.926863  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:55.426166  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:57.925997  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:00.426895  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:02.426946  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:04.926022  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:06.926220  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:08.926762  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:11.426886  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:13.926932  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:16.426662  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:18.426841  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:20.426897  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:22.926750  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:25.426775  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:27.426830  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:29.925872  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:31.926827  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:34.426004  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:36.426746  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:38.426935  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:40.926883  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:43.426642  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:45.926607  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:47.926842  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:49.926905  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:52.426896  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:54.926880  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:57.425877  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:59.425955  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:01.925996  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:04.426200  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:06.926482  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:08.926710  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:11.425937  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:13.926985  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:16.426458  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:18.426868  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:20.426964  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:22.925878  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:24.926696  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:26.926862  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:29.425949  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:31.426704  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:33.926728  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:35.926931  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:38.425941  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:40.426635  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:42.926712  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:44.926886  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:47.426924  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:49.926833  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:52.426814  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:54.925940  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:57.425869  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:59.426765  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:01.426860  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:03.925939  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:06.426949  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:08.926788  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:10.926843  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:13.426831  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:15.926757  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:18.426726  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:20.426786  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:22.926785  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:25.426679  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:27.926543  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:29.926643  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:32.426640  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:34.426685  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:36.926628  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:39.426783  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:41.926700  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:44.426866  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:46.926828  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:49.426832  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:51.926763  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:54.426883  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:56.926863  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:59.426810  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:01.926790  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:03.926852  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:06.426667  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:08.426899  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:10.926206  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:13.425947  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:15.426151  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:17.926845  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:20.426546  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:22.926068  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:24.926426  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:07:26.925595  228826 node_ready.go:38] duration metric: took 6m0.000244772s for node "test-preload-954233" to be "Ready" ...
	I1123 10:07:26.927457  228826 out.go:203] 
	W1123 10:07:26.928455  228826 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1123 10:07:26.928468  228826 out.go:285] * 
	* 
	W1123 10:07:26.930067  228826 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:07:26.930962  228826 out.go:203] 

                                                
                                                
** /stderr **
preload_test.go:67: out/minikube-linux-amd64 start -p test-preload-954233 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio failed: exit status 80
panic.go:615: *** TestPreload FAILED at 2025-11-23 10:07:26.968408502 +0000 UTC m=+2733.805509931
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect test-preload-954233
helpers_test.go:243: (dbg) docker inspect test-preload-954233:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "932ee4751510482ff11f985ba041af1ca805ef49047b3cad0d6a583af3faf561",
	        "Created": "2025-11-23T10:00:12.446686558Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 229071,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:01:20.235414069Z",
	            "FinishedAt": "2025-11-23T10:01:08.533274175Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/932ee4751510482ff11f985ba041af1ca805ef49047b3cad0d6a583af3faf561/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/932ee4751510482ff11f985ba041af1ca805ef49047b3cad0d6a583af3faf561/hostname",
	        "HostsPath": "/var/lib/docker/containers/932ee4751510482ff11f985ba041af1ca805ef49047b3cad0d6a583af3faf561/hosts",
	        "LogPath": "/var/lib/docker/containers/932ee4751510482ff11f985ba041af1ca805ef49047b3cad0d6a583af3faf561/932ee4751510482ff11f985ba041af1ca805ef49047b3cad0d6a583af3faf561-json.log",
	        "Name": "/test-preload-954233",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-954233:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "test-preload-954233",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "932ee4751510482ff11f985ba041af1ca805ef49047b3cad0d6a583af3faf561",
	                "LowerDir": "/var/lib/docker/overlay2/6449d0bab330e97fa9c026f4048994f96a2d9b2ece2fc2599168c80b01a095ea-init/diff:/var/lib/docker/overlay2/fa24abb4c55f78a010c7e2a32f724b8d5e912441e40bb77877899b0e5f3a9c8d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6449d0bab330e97fa9c026f4048994f96a2d9b2ece2fc2599168c80b01a095ea/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6449d0bab330e97fa9c026f4048994f96a2d9b2ece2fc2599168c80b01a095ea/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6449d0bab330e97fa9c026f4048994f96a2d9b2ece2fc2599168c80b01a095ea/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "test-preload-954233",
	                "Source": "/var/lib/docker/volumes/test-preload-954233/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-954233",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-954233",
	                "name.minikube.sigs.k8s.io": "test-preload-954233",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "87b26b222bd1770e69e341d84ba35297259dace82d9648994daa0fb03b5c1d45",
	            "SandboxKey": "/var/run/docker/netns/87b26b222bd1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32958"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32959"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32962"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32960"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32961"
	                    }
	                ]
	            },
	            "Networks": {
	                "test-preload-954233": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "07f7ef2551f54d3cf7163d1b0bd6b2902106209fbc2e1e303ed7d4478d8fb6fe",
	                    "EndpointID": "a15ec04cbbda358d10489bce56adc3350fa1db8c0e6a230ce88c85d7a2d57ef1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "f6:72:a8:e3:62:09",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "test-preload-954233",
	                        "932ee4751510"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-954233 -n test-preload-954233
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-954233 -n test-preload-954233: exit status 2 (302.259374ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-954233 logs -n 25
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ multinode-891772 cp multinode-891772-m03:/home/docker/cp-test.txt multinode-891772:/home/docker/cp-test_multinode-891772-m03_multinode-891772.txt         │ multinode-891772     │ jenkins │ v1.37.0 │ 23 Nov 25 09:56 UTC │ 23 Nov 25 09:56 UTC │
	│ ssh     │ multinode-891772 ssh -n multinode-891772-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-891772     │ jenkins │ v1.37.0 │ 23 Nov 25 09:56 UTC │ 23 Nov 25 09:56 UTC │
	│ ssh     │ multinode-891772 ssh -n multinode-891772 sudo cat /home/docker/cp-test_multinode-891772-m03_multinode-891772.txt                                          │ multinode-891772     │ jenkins │ v1.37.0 │ 23 Nov 25 09:56 UTC │ 23 Nov 25 09:56 UTC │
	│ cp      │ multinode-891772 cp multinode-891772-m03:/home/docker/cp-test.txt multinode-891772-m02:/home/docker/cp-test_multinode-891772-m03_multinode-891772-m02.txt │ multinode-891772     │ jenkins │ v1.37.0 │ 23 Nov 25 09:56 UTC │ 23 Nov 25 09:56 UTC │
	│ ssh     │ multinode-891772 ssh -n multinode-891772-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-891772     │ jenkins │ v1.37.0 │ 23 Nov 25 09:56 UTC │ 23 Nov 25 09:56 UTC │
	│ ssh     │ multinode-891772 ssh -n multinode-891772-m02 sudo cat /home/docker/cp-test_multinode-891772-m03_multinode-891772-m02.txt                                  │ multinode-891772     │ jenkins │ v1.37.0 │ 23 Nov 25 09:56 UTC │ 23 Nov 25 09:56 UTC │
	│ node    │ multinode-891772 node stop m03                                                                                                                            │ multinode-891772     │ jenkins │ v1.37.0 │ 23 Nov 25 09:56 UTC │ 23 Nov 25 09:56 UTC │
	│ node    │ multinode-891772 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-891772     │ jenkins │ v1.37.0 │ 23 Nov 25 09:56 UTC │ 23 Nov 25 09:56 UTC │
	│ node    │ list -p multinode-891772                                                                                                                                  │ multinode-891772     │ jenkins │ v1.37.0 │ 23 Nov 25 09:56 UTC │                     │
	│ stop    │ -p multinode-891772                                                                                                                                       │ multinode-891772     │ jenkins │ v1.37.0 │ 23 Nov 25 09:56 UTC │ 23 Nov 25 09:57 UTC │
	│ start   │ -p multinode-891772 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-891772     │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:58 UTC │
	│ node    │ list -p multinode-891772                                                                                                                                  │ multinode-891772     │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │                     │
	│ node    │ multinode-891772 node delete m03                                                                                                                          │ multinode-891772     │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │ 23 Nov 25 09:58 UTC │
	│ stop    │ multinode-891772 stop                                                                                                                                     │ multinode-891772     │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │ 23 Nov 25 09:58 UTC │
	│ start   │ -p multinode-891772 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio                                                          │ multinode-891772     │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │ 23 Nov 25 09:59 UTC │
	│ node    │ list -p multinode-891772                                                                                                                                  │ multinode-891772     │ jenkins │ v1.37.0 │ 23 Nov 25 09:59 UTC │                     │
	│ start   │ -p multinode-891772-m02 --driver=docker  --container-runtime=crio                                                                                         │ multinode-891772-m02 │ jenkins │ v1.37.0 │ 23 Nov 25 09:59 UTC │                     │
	│ start   │ -p multinode-891772-m03 --driver=docker  --container-runtime=crio                                                                                         │ multinode-891772-m03 │ jenkins │ v1.37.0 │ 23 Nov 25 09:59 UTC │ 23 Nov 25 10:00 UTC │
	│ node    │ add -p multinode-891772                                                                                                                                   │ multinode-891772     │ jenkins │ v1.37.0 │ 23 Nov 25 10:00 UTC │                     │
	│ delete  │ -p multinode-891772-m03                                                                                                                                   │ multinode-891772-m03 │ jenkins │ v1.37.0 │ 23 Nov 25 10:00 UTC │ 23 Nov 25 10:00 UTC │
	│ delete  │ -p multinode-891772                                                                                                                                       │ multinode-891772     │ jenkins │ v1.37.0 │ 23 Nov 25 10:00 UTC │ 23 Nov 25 10:00 UTC │
	│ start   │ -p test-preload-954233 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0 │ test-preload-954233  │ jenkins │ v1.37.0 │ 23 Nov 25 10:00 UTC │ 23 Nov 25 10:01 UTC │
	│ image   │ test-preload-954233 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-954233  │ jenkins │ v1.37.0 │ 23 Nov 25 10:01 UTC │ 23 Nov 25 10:01 UTC │
	│ stop    │ -p test-preload-954233                                                                                                                                    │ test-preload-954233  │ jenkins │ v1.37.0 │ 23 Nov 25 10:01 UTC │ 23 Nov 25 10:01 UTC │
	│ start   │ -p test-preload-954233 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                         │ test-preload-954233  │ jenkins │ v1.37.0 │ 23 Nov 25 10:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:01:08
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:01:08.946551  228826 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:01:08.946825  228826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:01:08.946836  228826 out.go:374] Setting ErrFile to fd 2...
	I1123 10:01:08.946840  228826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:01:08.947042  228826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:01:08.947494  228826 out.go:368] Setting JSON to false
	I1123 10:01:08.948586  228826 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9810,"bootTime":1763882259,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:01:08.948643  228826 start.go:143] virtualization: kvm guest
	I1123 10:01:08.950422  228826 out.go:179] * [test-preload-954233] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:01:08.951590  228826 notify.go:221] Checking for updates...
	I1123 10:01:08.951608  228826 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:01:08.952754  228826 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:01:08.953802  228826 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:01:08.954881  228826 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 10:01:08.955840  228826 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:01:08.956746  228826 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:01:08.958071  228826 config.go:182] Loaded profile config "test-preload-954233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1123 10:01:08.959498  228826 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1123 10:01:08.960345  228826 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:01:08.983122  228826 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 10:01:08.983241  228826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:01:09.043469  228826 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-23 10:01:09.031921931 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:01:09.043585  228826 docker.go:319] overlay module found
	I1123 10:01:09.045295  228826 out.go:179] * Using the docker driver based on existing profile
	I1123 10:01:09.046467  228826 start.go:309] selected driver: docker
	I1123 10:01:09.046481  228826 start.go:927] validating driver "docker" against &{Name:test-preload-954233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-954233 Namespace:default APIServerHAVIP: APIServ
erName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:01:09.046562  228826 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:01:09.047144  228826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:01:09.107217  228826 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-23 10:01:09.096567011 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:01:09.107479  228826 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:01:09.107511  228826 cni.go:84] Creating CNI manager for ""
	I1123 10:01:09.107572  228826 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:01:09.107619  228826 start.go:353] cluster config:
	{Name:test-preload-954233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-954233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:01:09.109862  228826 out.go:179] * Starting "test-preload-954233" primary control-plane node in "test-preload-954233" cluster
	I1123 10:01:09.110807  228826 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:01:09.111915  228826 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:01:09.112788  228826 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1123 10:01:09.112907  228826 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:01:09.133532  228826 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:01:09.133553  228826 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:01:09.526757  228826 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1123 10:01:09.526809  228826 cache.go:65] Caching tarball of preloaded images
	I1123 10:01:09.527030  228826 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1123 10:01:09.528620  228826 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1123 10:01:09.529599  228826 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1123 10:01:09.645135  228826 preload.go:295] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1123 10:01:09.645183  228826 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1123 10:01:20.192990  228826 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1123 10:01:20.193178  228826 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/config.json ...
	I1123 10:01:20.194235  228826 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:01:20.194297  228826 start.go:360] acquireMachinesLock for test-preload-954233: {Name:mkfd90ede73cd4bfbc6cf04937116c96f9dbe4ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:01:20.194366  228826 start.go:364] duration metric: took 43.871µs to acquireMachinesLock for "test-preload-954233"
	I1123 10:01:20.194382  228826 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:01:20.194387  228826 fix.go:54] fixHost starting: 
	I1123 10:01:20.194642  228826 cli_runner.go:164] Run: docker container inspect test-preload-954233 --format={{.State.Status}}
	I1123 10:01:20.211038  228826 fix.go:112] recreateIfNeeded on test-preload-954233: state=Stopped err=<nil>
	W1123 10:01:20.211108  228826 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 10:01:20.212878  228826 out.go:252] * Restarting existing docker container for "test-preload-954233" ...
	I1123 10:01:20.212948  228826 cli_runner.go:164] Run: docker start test-preload-954233
	I1123 10:01:20.475249  228826 cli_runner.go:164] Run: docker container inspect test-preload-954233 --format={{.State.Status}}
	I1123 10:01:20.494943  228826 kic.go:430] container "test-preload-954233" state is running.
	I1123 10:01:20.495354  228826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-954233
	I1123 10:01:20.514025  228826 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/config.json ...
	I1123 10:01:20.514293  228826 machine.go:94] provisionDockerMachine start ...
	I1123 10:01:20.514382  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:20.533529  228826 main.go:143] libmachine: Using SSH client type: native
	I1123 10:01:20.533871  228826 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1123 10:01:20.533887  228826 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:01:20.534542  228826 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51358->127.0.0.1:32958: read: connection reset by peer
	I1123 10:01:23.678901  228826 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-954233
	
	I1123 10:01:23.678946  228826 ubuntu.go:182] provisioning hostname "test-preload-954233"
	I1123 10:01:23.679005  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:23.696025  228826 main.go:143] libmachine: Using SSH client type: native
	I1123 10:01:23.696269  228826 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1123 10:01:23.696284  228826 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-954233 && echo "test-preload-954233" | sudo tee /etc/hostname
	I1123 10:01:23.846319  228826 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-954233
	
	I1123 10:01:23.846418  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:23.863785  228826 main.go:143] libmachine: Using SSH client type: native
	I1123 10:01:23.864011  228826 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1123 10:01:23.864026  228826 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-954233' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-954233/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-954233' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:01:24.004600  228826 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:01:24.004631  228826 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-64343/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-64343/.minikube}
	I1123 10:01:24.004653  228826 ubuntu.go:190] setting up certificates
	I1123 10:01:24.004665  228826 provision.go:84] configureAuth start
	I1123 10:01:24.004721  228826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-954233
	I1123 10:01:24.022193  228826 provision.go:143] copyHostCerts
	I1123 10:01:24.022250  228826 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem, removing ...
	I1123 10:01:24.022271  228826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem
	I1123 10:01:24.022340  228826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem (1675 bytes)
	I1123 10:01:24.022453  228826 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem, removing ...
	I1123 10:01:24.022466  228826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem
	I1123 10:01:24.022496  228826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem (1082 bytes)
	I1123 10:01:24.022564  228826 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem, removing ...
	I1123 10:01:24.022572  228826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem
	I1123 10:01:24.022596  228826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem (1123 bytes)
	I1123 10:01:24.022668  228826 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem org=jenkins.test-preload-954233 san=[127.0.0.1 192.168.76.2 localhost minikube test-preload-954233]
	I1123 10:01:24.089180  228826 provision.go:177] copyRemoteCerts
	I1123 10:01:24.089245  228826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:01:24.089284  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:24.106405  228826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/test-preload-954233/id_rsa Username:docker}
	I1123 10:01:24.207698  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 10:01:24.224329  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 10:01:24.240705  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:01:24.256757  228826 provision.go:87] duration metric: took 252.077049ms to configureAuth
	I1123 10:01:24.256785  228826 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:01:24.256959  228826 config.go:182] Loaded profile config "test-preload-954233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1123 10:01:24.257103  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:24.273786  228826 main.go:143] libmachine: Using SSH client type: native
	I1123 10:01:24.273999  228826 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1123 10:01:24.274017  228826 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:01:24.575066  228826 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:01:24.575114  228826 machine.go:97] duration metric: took 4.060801733s to provisionDockerMachine
	I1123 10:01:24.575131  228826 start.go:293] postStartSetup for "test-preload-954233" (driver="docker")
	I1123 10:01:24.575146  228826 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:01:24.575240  228826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:01:24.575301  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:24.592340  228826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/test-preload-954233/id_rsa Username:docker}
	I1123 10:01:24.692131  228826 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:01:24.695543  228826 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:01:24.695578  228826 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:01:24.695590  228826 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/addons for local assets ...
	I1123 10:01:24.695643  228826 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/files for local assets ...
	I1123 10:01:24.695741  228826 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem -> 678702.pem in /etc/ssl/certs
	I1123 10:01:24.695856  228826 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:01:24.702924  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:01:24.719542  228826 start.go:296] duration metric: took 144.393811ms for postStartSetup
	I1123 10:01:24.719616  228826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:01:24.719675  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:24.736421  228826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/test-preload-954233/id_rsa Username:docker}
	I1123 10:01:24.833187  228826 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:01:24.837712  228826 fix.go:56] duration metric: took 4.643315335s for fixHost
	I1123 10:01:24.837742  228826 start.go:83] releasing machines lock for "test-preload-954233", held for 4.643363694s
	I1123 10:01:24.837819  228826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-954233
	I1123 10:01:24.854663  228826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:01:24.854707  228826 ssh_runner.go:195] Run: cat /version.json
	I1123 10:01:24.854765  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:24.854766  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:24.873067  228826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/test-preload-954233/id_rsa Username:docker}
	I1123 10:01:24.873342  228826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/test-preload-954233/id_rsa Username:docker}
	I1123 10:01:24.970629  228826 ssh_runner.go:195] Run: systemctl --version
	I1123 10:01:25.025854  228826 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:01:25.059631  228826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:01:25.064526  228826 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:01:25.064609  228826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:01:25.072540  228826 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:01:25.072567  228826 start.go:496] detecting cgroup driver to use...
	I1123 10:01:25.072608  228826 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 10:01:25.072656  228826 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:01:25.086659  228826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:01:25.098713  228826 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:01:25.098771  228826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:01:25.112393  228826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:01:25.123876  228826 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:01:25.199750  228826 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:01:25.277399  228826 docker.go:234] disabling docker service ...
	I1123 10:01:25.277460  228826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:01:25.290972  228826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:01:25.302589  228826 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:01:25.376925  228826 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:01:25.458188  228826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:01:25.470378  228826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:01:25.484076  228826 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1123 10:01:25.484151  228826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:01:25.492781  228826 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 10:01:25.492834  228826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:01:25.501148  228826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:01:25.509341  228826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:01:25.517576  228826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:01:25.525445  228826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:01:25.533781  228826 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:01:25.541759  228826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:01:25.549991  228826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:01:25.556940  228826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:01:25.563786  228826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:01:25.640652  228826 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:01:25.768119  228826 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:01:25.768192  228826 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:01:25.772038  228826 start.go:564] Will wait 60s for crictl version
	I1123 10:01:25.772117  228826 ssh_runner.go:195] Run: which crictl
	I1123 10:01:25.775436  228826 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:01:25.799027  228826 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:01:25.799132  228826 ssh_runner.go:195] Run: crio --version
	I1123 10:01:25.826060  228826 ssh_runner.go:195] Run: crio --version
	I1123 10:01:25.854596  228826 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.34.2 ...
	I1123 10:01:25.855762  228826 cli_runner.go:164] Run: docker network inspect test-preload-954233 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:01:25.872454  228826 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:01:25.876418  228826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:01:25.886208  228826 kubeadm.go:884] updating cluster {Name:test-preload-954233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-954233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics
:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:01:25.886364  228826 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1123 10:01:25.886423  228826 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:01:25.918201  228826 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:01:25.918228  228826 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:01:25.918289  228826 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:01:25.941477  228826 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:01:25.941499  228826 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:01:25.941506  228826 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.32.0 crio true true} ...
	I1123 10:01:25.941603  228826 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=test-preload-954233 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-954233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:01:25.941662  228826 ssh_runner.go:195] Run: crio config
	I1123 10:01:25.985348  228826 cni.go:84] Creating CNI manager for ""
	I1123 10:01:25.985367  228826 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:01:25.985383  228826 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:01:25.985407  228826 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-954233 NodeName:test-preload-954233 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:01:25.985535  228826 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-954233"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:01:25.985598  228826 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1123 10:01:25.993448  228826 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:01:25.993504  228826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:01:26.000723  228826 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1123 10:01:26.012590  228826 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:01:26.024287  228826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1123 10:01:26.036199  228826 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:01:26.039641  228826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:01:26.048774  228826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:01:26.130157  228826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:01:26.152208  228826 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233 for IP: 192.168.76.2
	I1123 10:01:26.152234  228826 certs.go:195] generating shared ca certs ...
	I1123 10:01:26.152255  228826 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:01:26.152419  228826 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 10:01:26.152473  228826 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 10:01:26.152488  228826 certs.go:257] generating profile certs ...
	I1123 10:01:26.152611  228826 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/client.key
	I1123 10:01:26.152690  228826 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/apiserver.key.76194393
	I1123 10:01:26.152748  228826 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/proxy-client.key
	I1123 10:01:26.152873  228826 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem (1338 bytes)
	W1123 10:01:26.152917  228826 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870_empty.pem, impossibly tiny 0 bytes
	I1123 10:01:26.152932  228826 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 10:01:26.152967  228826 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:01:26.153003  228826 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:01:26.153045  228826 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 10:01:26.153118  228826 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:01:26.153759  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:01:26.171611  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:01:26.190488  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:01:26.208106  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 10:01:26.229851  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 10:01:26.249309  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 10:01:26.265932  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:01:26.282236  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:01:26.298337  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:01:26.314402  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem --> /usr/share/ca-certificates/67870.pem (1338 bytes)
	I1123 10:01:26.330245  228826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /usr/share/ca-certificates/678702.pem (1708 bytes)
	I1123 10:01:26.347279  228826 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:01:26.358953  228826 ssh_runner.go:195] Run: openssl version
	I1123 10:01:26.364736  228826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67870.pem && ln -fs /usr/share/ca-certificates/67870.pem /etc/ssl/certs/67870.pem"
	I1123 10:01:26.372431  228826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67870.pem
	I1123 10:01:26.375795  228826 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:28 /usr/share/ca-certificates/67870.pem
	I1123 10:01:26.375849  228826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67870.pem
	I1123 10:01:26.409472  228826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67870.pem /etc/ssl/certs/51391683.0"
	I1123 10:01:26.416735  228826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678702.pem && ln -fs /usr/share/ca-certificates/678702.pem /etc/ssl/certs/678702.pem"
	I1123 10:01:26.424951  228826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678702.pem
	I1123 10:01:26.429451  228826 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:28 /usr/share/ca-certificates/678702.pem
	I1123 10:01:26.429502  228826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678702.pem
	I1123 10:01:26.462659  228826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678702.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:01:26.470843  228826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:01:26.478938  228826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:01:26.482454  228826 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:01:26.482511  228826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:01:26.515969  228826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:01:26.523572  228826 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:01:26.527170  228826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:01:26.560797  228826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:01:26.594467  228826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:01:26.627990  228826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:01:26.670906  228826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:01:26.718635  228826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1123 10:01:26.757374  228826 kubeadm.go:401] StartCluster: {Name:test-preload-954233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-954233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:01:26.757477  228826 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:01:26.757545  228826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:01:26.784929  228826 cri.go:89] found id: ""
	I1123 10:01:26.785023  228826 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:01:26.792966  228826 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:01:26.792985  228826 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:01:26.793047  228826 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:01:26.800243  228826 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:01:26.800657  228826 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-954233" does not appear in /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:01:26.800759  228826 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-64343/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-954233" cluster setting kubeconfig missing "test-preload-954233" context setting]
	I1123 10:01:26.801005  228826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:01:26.801514  228826 kapi.go:59] client config for test-preload-954233: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/client.crt", KeyFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/client.key", CAFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 10:01:26.801900  228826 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1123 10:01:26.801912  228826 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1123 10:01:26.801917  228826 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1123 10:01:26.801921  228826 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1123 10:01:26.801924  228826 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1123 10:01:26.802311  228826 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:01:26.809528  228826 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 10:01:26.809561  228826 kubeadm.go:602] duration metric: took 16.568414ms to restartPrimaryControlPlane
	I1123 10:01:26.809572  228826 kubeadm.go:403] duration metric: took 52.211722ms to StartCluster
	I1123 10:01:26.809588  228826 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:01:26.809653  228826 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:01:26.810348  228826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:01:26.810589  228826 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:01:26.810663  228826 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:01:26.810754  228826 addons.go:70] Setting storage-provisioner=true in profile "test-preload-954233"
	I1123 10:01:26.810773  228826 addons.go:239] Setting addon storage-provisioner=true in "test-preload-954233"
	I1123 10:01:26.810771  228826 addons.go:70] Setting default-storageclass=true in profile "test-preload-954233"
	W1123 10:01:26.810785  228826 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:01:26.810798  228826 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-954233"
	I1123 10:01:26.810812  228826 config.go:182] Loaded profile config "test-preload-954233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1123 10:01:26.810819  228826 host.go:66] Checking if "test-preload-954233" exists ...
	I1123 10:01:26.811054  228826 cli_runner.go:164] Run: docker container inspect test-preload-954233 --format={{.State.Status}}
	I1123 10:01:26.811221  228826 cli_runner.go:164] Run: docker container inspect test-preload-954233 --format={{.State.Status}}
	I1123 10:01:26.813451  228826 out.go:179] * Verifying Kubernetes components...
	I1123 10:01:26.814635  228826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:01:26.829699  228826 kapi.go:59] client config for test-preload-954233: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/client.crt", KeyFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/profiles/test-preload-954233/client.key", CAFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 10:01:26.830042  228826 addons.go:239] Setting addon default-storageclass=true in "test-preload-954233"
	W1123 10:01:26.830061  228826 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:01:26.830106  228826 host.go:66] Checking if "test-preload-954233" exists ...
	I1123 10:01:26.830452  228826 cli_runner.go:164] Run: docker container inspect test-preload-954233 --format={{.State.Status}}
	I1123 10:01:26.831506  228826 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:01:26.832728  228826 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:01:26.832751  228826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:01:26.832818  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:26.857250  228826 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:01:26.857299  228826 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:01:26.857393  228826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-954233
	I1123 10:01:26.858440  228826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/test-preload-954233/id_rsa Username:docker}
	I1123 10:01:26.877257  228826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/test-preload-954233/id_rsa Username:docker}
	I1123 10:01:26.912766  228826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:01:26.925309  228826 node_ready.go:35] waiting up to 6m0s for node "test-preload-954233" to be "Ready" ...
	I1123 10:01:26.965100  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:01:26.983944  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:27.024468  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:27.024517  228826 retry.go:31] will retry after 142.659682ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:01:27.041628  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:27.041670  228826 retry.go:31] will retry after 292.996358ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:27.167925  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:01:27.220263  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:27.220301  228826 retry.go:31] will retry after 215.762042ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:27.335533  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:27.388922  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:27.388960  228826 retry.go:31] will retry after 369.458599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:27.436676  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:01:27.488943  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:27.488981  228826 retry.go:31] will retry after 636.360205ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:27.759408  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:27.811979  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:27.812029  228826 retry.go:31] will retry after 440.629206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:28.125960  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:01:28.179054  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:28.179113  228826 retry.go:31] will retry after 1.187761769s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:28.253310  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:28.305112  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:28.305145  228826 retry.go:31] will retry after 935.627035ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:01:28.926864  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:01:29.241475  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:29.293890  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:29.293921  228826 retry.go:31] will retry after 678.405185ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:29.367073  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:01:29.419724  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:29.419757  228826 retry.go:31] will retry after 1.390495148s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:29.973429  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:30.026867  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:30.026902  228826 retry.go:31] will retry after 1.351740782s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:30.811293  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:01:30.866515  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:30.866548  228826 retry.go:31] will retry after 1.183229714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:31.379748  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:31.426738  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:01:31.433549  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:31.433584  228826 retry.go:31] will retry after 3.413292561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:32.050923  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:01:32.105472  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:32.105512  228826 retry.go:31] will retry after 4.123353459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:01:33.926624  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:01:34.847225  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:34.900761  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:34.900795  228826 retry.go:31] will retry after 5.118384238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:36.229498  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:01:36.283171  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:36.283207  228826 retry.go:31] will retry after 5.373739885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:01:36.426809  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:01:38.926136  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:01:40.020211  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:40.074626  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:40.074671  228826 retry.go:31] will retry after 8.964633846s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:01:40.926325  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:01:41.657845  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:01:41.711303  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:41.711336  228826 retry.go:31] will retry after 7.008244151s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:01:43.426224  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:01:45.426671  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:01:47.926175  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:01:48.720710  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:01:48.774610  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:48.774647  228826 retry.go:31] will retry after 13.437674867s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:49.039960  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:49.093577  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:49.093610  228826 retry.go:31] will retry after 8.541757259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:01:49.926654  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:01:52.426033  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:01:54.426458  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:01:56.925958  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:01:57.636397  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:01:57.689271  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:01:57.689389  228826 retry.go:31] will retry after 7.860599543s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:01:58.926250  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:00.926812  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:02:02.213477  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:02:02.269102  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:02:02.269137  228826 retry.go:31] will retry after 20.4403832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:02:03.426069  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:02:05.550318  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:02:05.605048  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:02:05.605118  228826 retry.go:31] will retry after 17.363651277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:02:05.925913  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:07.926609  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:10.426274  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:12.426710  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:14.926143  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:16.926700  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:19.426540  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:21.926061  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:02:22.710570  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:02:22.764777  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:02:22.764820  228826 retry.go:31] will retry after 18.160227899s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:02:22.969795  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:02:23.024639  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:02:23.024682  228826 retry.go:31] will retry after 39.446597275s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:02:23.926919  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:26.426470  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:28.426790  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:30.426861  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:32.926584  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:34.926817  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:37.426443  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:39.926645  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:02:40.925328  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:02:40.978804  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1123 10:02:40.978841  228826 retry.go:31] will retry after 44.410730442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:02:41.926919  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:44.426248  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:46.426679  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:48.926632  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:50.926719  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:53.426626  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:55.426716  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:57.426757  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:02:59.926720  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:02.426680  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:03:02.471941  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1123 10:03:02.528064  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:03:02.528237  228826 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1123 10:03:04.926782  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:07.426040  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:09.426610  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:11.925927  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:13.926351  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:16.426713  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:18.926016  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:21.425915  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:23.426775  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:03:25.390578  228826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1123 10:03:25.426825  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:25.444620  228826 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1123 10:03:25.444792  228826 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1123 10:03:25.446628  228826 out.go:179] * Enabled addons: 
	I1123 10:03:25.447635  228826 addons.go:530] duration metric: took 1m58.636982783s for enable addons: enabled=[]
	W1123 10:03:27.926825  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:30.426593  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:32.926019  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:35.425867  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:37.426899  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:39.926816  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:41.926903  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:44.426057  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:46.925822  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:48.925882  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:50.926798  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:52.926863  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:55.426166  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:03:57.925997  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:00.426895  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:02.426946  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:04.926022  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:06.926220  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:08.926762  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:11.426886  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:13.926932  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:16.426662  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:18.426841  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:20.426897  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:22.926750  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:25.426775  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:27.426830  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:29.925872  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:31.926827  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:34.426004  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:36.426746  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:38.426935  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:40.926883  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:43.426642  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:45.926607  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:47.926842  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:49.926905  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:52.426896  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:54.926880  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:57.425877  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:04:59.425955  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:01.925996  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:04.426200  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:06.926482  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:08.926710  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:11.425937  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:13.926985  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:16.426458  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:18.426868  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:20.426964  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:22.925878  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:24.926696  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:26.926862  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:29.425949  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:31.426704  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:33.926728  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:35.926931  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:38.425941  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:40.426635  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:42.926712  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:44.926886  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:47.426924  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:49.926833  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:52.426814  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:54.925940  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:57.425869  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:05:59.426765  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:01.426860  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:03.925939  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:06.426949  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:08.926788  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:10.926843  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:13.426831  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:15.926757  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:18.426726  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:20.426786  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:22.926785  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:25.426679  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:27.926543  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:29.926643  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:32.426640  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:34.426685  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:36.926628  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:39.426783  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:41.926700  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:44.426866  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:46.926828  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:49.426832  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:51.926763  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:54.426883  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:56.926863  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:06:59.426810  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:01.926790  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:03.926852  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:06.426667  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:08.426899  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:10.926206  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:13.425947  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:15.426151  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:17.926845  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:20.426546  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:22.926068  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	W1123 10:07:24.926426  228826 node_ready.go:55] error getting node "test-preload-954233" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-954233": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 10:07:26.925595  228826 node_ready.go:38] duration metric: took 6m0.000244772s for node "test-preload-954233" to be "Ready" ...
	I1123 10:07:26.927457  228826 out.go:203] 
	W1123 10:07:26.928455  228826 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1123 10:07:26.928468  228826 out.go:285] * 
	W1123 10:07:26.930067  228826 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:07:26.930962  228826 out.go:203] 
	
	
	==> CRI-O <==
	Nov 23 10:02:52 test-preload-954233 crio[553]: time="2025-11-23T10:02:52.246276225Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/13d4b0e0f4f66374462b570f3cebdee348eecbb8a5296dfe803c05402d319d96/merged\": directory not empty" id=28433e8a-2229-4010-bcde-1f74dcc6cef8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:03:35 test-preload-954233 crio[553]: time="2025-11-23T10:03:35.492840755Z" level=info msg="createCtr: deleting container 97b6bf18b3b194c50c1b741661045bd6bf074a916236bc64a0cbde99ea98c019 from storage" id=43b9bce0-551e-4b02-91df-61927439e2d5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:03:35 test-preload-954233 crio[553]: time="2025-11-23T10:03:35.493178488Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/bcdc9fafb402d7e252a65e42cc8113031099acab924ddfe640c6c81b1b911bdd/merged\": directory not empty" id=43b9bce0-551e-4b02-91df-61927439e2d5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:03:35 test-preload-954233 crio[553]: time="2025-11-23T10:03:35.493914575Z" level=info msg="createCtr: deleting container 2d831a5134e2703ec472727971fee68aa56f3b5dd4dba8c2566a222fb91b4db4 from storage" id=3bc6ca9b-f845-4daf-b6b7-4560d43dba5c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:03:35 test-preload-954233 crio[553]: time="2025-11-23T10:03:35.49414274Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/65cdfa4f9b348fecddc8712300c0e776acf802e8a5ff2e84a88abe38dfb2a6fb/merged\": directory not empty" id=3bc6ca9b-f845-4daf-b6b7-4560d43dba5c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:03:35 test-preload-954233 crio[553]: time="2025-11-23T10:03:35.495052054Z" level=info msg="createCtr: deleting container 7cbff02514f21d280cb093e81c22fff28948c1a3e79791e419a66affd331b193 from storage" id=7f64e6dd-82f4-43dc-9430-8fb5bc1c95e4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:03:35 test-preload-954233 crio[553]: time="2025-11-23T10:03:35.495212361Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/3d4753fc6ec97238b0fc7323443c314a9af6ae4269446ce7d6defeb0974e69d4/merged\": directory not empty" id=7f64e6dd-82f4-43dc-9430-8fb5bc1c95e4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:03:35 test-preload-954233 crio[553]: time="2025-11-23T10:03:35.496148586Z" level=info msg="createCtr: deleting container e7c4a9a2aede81571f1163ebf42be89a0b00a1a4c585b1423e11c9f2ab12b61c from storage" id=28433e8a-2229-4010-bcde-1f74dcc6cef8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:03:35 test-preload-954233 crio[553]: time="2025-11-23T10:03:35.496346951Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/13d4b0e0f4f66374462b570f3cebdee348eecbb8a5296dfe803c05402d319d96/merged\": directory not empty" id=28433e8a-2229-4010-bcde-1f74dcc6cef8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:04:40 test-preload-954233 crio[553]: time="2025-11-23T10:04:40.366920683Z" level=info msg="createCtr: deleting container 97b6bf18b3b194c50c1b741661045bd6bf074a916236bc64a0cbde99ea98c019 from storage" id=43b9bce0-551e-4b02-91df-61927439e2d5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:04:40 test-preload-954233 crio[553]: time="2025-11-23T10:04:40.367279369Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/bcdc9fafb402d7e252a65e42cc8113031099acab924ddfe640c6c81b1b911bdd/merged\": directory not empty" id=43b9bce0-551e-4b02-91df-61927439e2d5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:04:40 test-preload-954233 crio[553]: time="2025-11-23T10:04:40.368059257Z" level=info msg="createCtr: deleting container 2d831a5134e2703ec472727971fee68aa56f3b5dd4dba8c2566a222fb91b4db4 from storage" id=3bc6ca9b-f845-4daf-b6b7-4560d43dba5c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:04:40 test-preload-954233 crio[553]: time="2025-11-23T10:04:40.368285299Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/65cdfa4f9b348fecddc8712300c0e776acf802e8a5ff2e84a88abe38dfb2a6fb/merged\": directory not empty" id=3bc6ca9b-f845-4daf-b6b7-4560d43dba5c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:04:40 test-preload-954233 crio[553]: time="2025-11-23T10:04:40.369187185Z" level=info msg="createCtr: deleting container 7cbff02514f21d280cb093e81c22fff28948c1a3e79791e419a66affd331b193 from storage" id=7f64e6dd-82f4-43dc-9430-8fb5bc1c95e4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:04:40 test-preload-954233 crio[553]: time="2025-11-23T10:04:40.369366078Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/3d4753fc6ec97238b0fc7323443c314a9af6ae4269446ce7d6defeb0974e69d4/merged\": directory not empty" id=7f64e6dd-82f4-43dc-9430-8fb5bc1c95e4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:04:40 test-preload-954233 crio[553]: time="2025-11-23T10:04:40.37029144Z" level=info msg="createCtr: deleting container e7c4a9a2aede81571f1163ebf42be89a0b00a1a4c585b1423e11c9f2ab12b61c from storage" id=28433e8a-2229-4010-bcde-1f74dcc6cef8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:04:40 test-preload-954233 crio[553]: time="2025-11-23T10:04:40.370470248Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/13d4b0e0f4f66374462b570f3cebdee348eecbb8a5296dfe803c05402d319d96/merged\": directory not empty" id=28433e8a-2229-4010-bcde-1f74dcc6cef8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:06:17 test-preload-954233 crio[553]: time="2025-11-23T10:06:17.677510796Z" level=info msg="createCtr: deleting container 97b6bf18b3b194c50c1b741661045bd6bf074a916236bc64a0cbde99ea98c019 from storage" id=43b9bce0-551e-4b02-91df-61927439e2d5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:06:17 test-preload-954233 crio[553]: time="2025-11-23T10:06:17.677815854Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/bcdc9fafb402d7e252a65e42cc8113031099acab924ddfe640c6c81b1b911bdd/merged\": directory not empty" id=43b9bce0-551e-4b02-91df-61927439e2d5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:06:17 test-preload-954233 crio[553]: time="2025-11-23T10:06:17.678622925Z" level=info msg="createCtr: deleting container 2d831a5134e2703ec472727971fee68aa56f3b5dd4dba8c2566a222fb91b4db4 from storage" id=3bc6ca9b-f845-4daf-b6b7-4560d43dba5c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:06:17 test-preload-954233 crio[553]: time="2025-11-23T10:06:17.678797262Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/65cdfa4f9b348fecddc8712300c0e776acf802e8a5ff2e84a88abe38dfb2a6fb/merged\": directory not empty" id=3bc6ca9b-f845-4daf-b6b7-4560d43dba5c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:06:17 test-preload-954233 crio[553]: time="2025-11-23T10:06:17.67972203Z" level=info msg="createCtr: deleting container 7cbff02514f21d280cb093e81c22fff28948c1a3e79791e419a66affd331b193 from storage" id=7f64e6dd-82f4-43dc-9430-8fb5bc1c95e4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:06:17 test-preload-954233 crio[553]: time="2025-11-23T10:06:17.67988109Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/3d4753fc6ec97238b0fc7323443c314a9af6ae4269446ce7d6defeb0974e69d4/merged\": directory not empty" id=7f64e6dd-82f4-43dc-9430-8fb5bc1c95e4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:06:17 test-preload-954233 crio[553]: time="2025-11-23T10:06:17.68084924Z" level=info msg="createCtr: deleting container e7c4a9a2aede81571f1163ebf42be89a0b00a1a4c585b1423e11c9f2ab12b61c from storage" id=28433e8a-2229-4010-bcde-1f74dcc6cef8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:06:17 test-preload-954233 crio[553]: time="2025-11-23T10:06:17.681150686Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/13d4b0e0f4f66374462b570f3cebdee348eecbb8a5296dfe803c05402d319d96/merged\": directory not empty" id=28433e8a-2229-4010-bcde-1f74dcc6cef8 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov23 09:25] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.037608] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.023905] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.023966] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000012] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +2.048049] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +4.031511] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +8.255356] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[ +16.383752] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 09:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	
	
	==> kernel <==
	 10:07:27 up  2:49,  0 user,  load average: 0.02, 0.31, 0.75
	Linux test-preload-954233 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Nov 23 10:06:57 test-preload-954233 kubelet[716]: E1123 10:06:57.480058     716 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{test-preload-954233.187a9a81c5eb3841  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:test-preload-954233,UID:test-preload-954233,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node test-preload-954233 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:test-preload-954233,},FirstTimestamp:2025-11-23 10:01:26.230956097 +0000 UTC m=+0.076060188,LastTimestamp:2025-11-23 10:01:26.230956097 +0000 UTC m=+0.076060188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance
:test-preload-954233,}"
	Nov 23 10:06:59 test-preload-954233 kubelet[716]: W1123 10:06:59.598151     716 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Nov 23 10:06:59 test-preload-954233 kubelet[716]: E1123 10:06:59.598241     716 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	Nov 23 10:07:00 test-preload-954233 kubelet[716]: E1123 10:07:00.877964     716 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-954233?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Nov 23 10:07:01 test-preload-954233 kubelet[716]: I1123 10:07:01.047573     716 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-954233"
	Nov 23 10:07:01 test-preload-954233 kubelet[716]: E1123 10:07:01.048000     716 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-954233"
	Nov 23 10:07:06 test-preload-954233 kubelet[716]: E1123 10:07:06.253512     716 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"test-preload-954233\" not found"
	Nov 23 10:07:07 test-preload-954233 kubelet[716]: E1123 10:07:07.481738     716 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{test-preload-954233.187a9a81c5eb3841  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:test-preload-954233,UID:test-preload-954233,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node test-preload-954233 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:test-preload-954233,},FirstTimestamp:2025-11-23 10:01:26.230956097 +0000 UTC m=+0.076060188,LastTimestamp:2025-11-23 10:01:26.230956097 +0000 UTC m=+0.076060188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance
:test-preload-954233,}"
	Nov 23 10:07:07 test-preload-954233 kubelet[716]: E1123 10:07:07.879313     716 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-954233?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Nov 23 10:07:08 test-preload-954233 kubelet[716]: I1123 10:07:08.049307     716 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-954233"
	Nov 23 10:07:08 test-preload-954233 kubelet[716]: E1123 10:07:08.049680     716 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-954233"
	Nov 23 10:07:14 test-preload-954233 kubelet[716]: W1123 10:07:14.068423     716 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Nov 23 10:07:14 test-preload-954233 kubelet[716]: E1123 10:07:14.068517     716 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	Nov 23 10:07:14 test-preload-954233 kubelet[716]: E1123 10:07:14.879939     716 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-954233?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Nov 23 10:07:15 test-preload-954233 kubelet[716]: I1123 10:07:15.051312     716 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-954233"
	Nov 23 10:07:15 test-preload-954233 kubelet[716]: E1123 10:07:15.051774     716 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-954233"
	Nov 23 10:07:16 test-preload-954233 kubelet[716]: E1123 10:07:16.254388     716 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"test-preload-954233\" not found"
	Nov 23 10:07:17 test-preload-954233 kubelet[716]: E1123 10:07:17.483403     716 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{test-preload-954233.187a9a81c5eb3841  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:test-preload-954233,UID:test-preload-954233,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node test-preload-954233 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:test-preload-954233,},FirstTimestamp:2025-11-23 10:01:26.230956097 +0000 UTC m=+0.076060188,LastTimestamp:2025-11-23 10:01:26.230956097 +0000 UTC m=+0.076060188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance
:test-preload-954233,}"
	Nov 23 10:07:20 test-preload-954233 kubelet[716]: W1123 10:07:20.738362     716 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Nov 23 10:07:20 test-preload-954233 kubelet[716]: E1123 10:07:20.738437     716 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	Nov 23 10:07:21 test-preload-954233 kubelet[716]: E1123 10:07:21.880733     716 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-954233?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Nov 23 10:07:22 test-preload-954233 kubelet[716]: I1123 10:07:22.053731     716 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-954233"
	Nov 23 10:07:22 test-preload-954233 kubelet[716]: E1123 10:07:22.054160     716 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-954233"
	Nov 23 10:07:26 test-preload-954233 kubelet[716]: E1123 10:07:26.255289     716 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"test-preload-954233\" not found"
	Nov 23 10:07:27 test-preload-954233 kubelet[716]: E1123 10:07:27.484504     716 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{test-preload-954233.187a9a81c5eb3841  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:test-preload-954233,UID:test-preload-954233,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node test-preload-954233 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:test-preload-954233,},FirstTimestamp:2025-11-23 10:01:26.230956097 +0000 UTC m=+0.076060188,LastTimestamp:2025-11-23 10:01:26.230956097 +0000 UTC m=+0.076060188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance
:test-preload-954233,}"
	

                                                
                                                
-- /stdout --
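Note on the failure above: the node never reaches Ready because kube-apiserver on 192.168.76.2:8443 never comes back after the restart (every status probe for six minutes ends in "connection refused"), so the kubelet cannot register the node or renew its lease, and the storageclass/storage-provisioner addon applies against localhost:8443 fail for the same reason. The CRI-O section points at the likely root cause: createCtr repeatedly fails to recreate the control-plane containers because cleanup of stale overlay mount points fails with "directory not empty". Had the profile not been cleaned up below, one way to confirm this by hand would be roughly the following (profile name and overlay path are taken from the log; the crictl/journalctl invocations are illustrative, not part of the test):

	minikube ssh -p test-preload-954233 "sudo crictl ps -a"        # expect no control-plane container ever reaching Running
	minikube ssh -p test-preload-954233 "sudo journalctl -u crio -n 100 --no-pager"
	minikube ssh -p test-preload-954233 "sudo ls -la /var/lib/containers/storage/overlay/13d4b0e0f4f66374462b570f3cebdee348eecbb8a5296dfe803c05402d319d96/merged"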
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-954233 -n test-preload-954233
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-954233 -n test-preload-954233: exit status 2 (302.735521ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "test-preload-954233" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-954233" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-954233
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-954233: (2.320433841s)
--- FAIL: TestPreload (439.01s)
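The 439s wall time is dominated by the 6m0s WaitNodeCondition deadline (plus the initial start and preload phases); the test only fails once that wait expires with "context deadline exceeded". To iterate on this test alone against the already-built binary, the usual pattern is roughly the sketch below (package path and flag name are assumptions based on minikube's contributor workflow; check test/integration/main_test.go before relying on them):

	go test -v -timeout 40m ./test/integration -run 'TestPreload' -args --minikube-start-args='--driver=docker --container-runtime=crio'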

                                                
                                    
x
+
TestPause/serial/Pause (7.17s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-528307 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-528307 --alsologtostderr -v=5: exit status 80 (3.02480517s)
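The pause command itself dies after ~3s of the 7.17s test. The visible part of the trace below shows it loading the pause-528307 profile, inspecting the docker container state, and collecting the namespaces it is supposed to pause ([kube-system kubernetes-dashboard istio-operator]). A quick manual check of the same preconditions, assuming the profile were still up (commands are illustrative; the container-state check mirrors the one in the trace):

	docker container inspect pause-528307 --format={{.State.Status}}
	minikube ssh -p pause-528307 "sudo crictl pods --namespace kube-system -q"     # pod sandboxes the pause step would act on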

                                                
                                                
-- stdout --
	* Pausing node pause-528307 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:10:18.910877  251210 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:10:18.911272  251210 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:10:18.911287  251210 out.go:374] Setting ErrFile to fd 2...
	I1123 10:10:18.911295  251210 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:10:18.911647  251210 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:10:18.912030  251210 out.go:368] Setting JSON to false
	I1123 10:10:18.912061  251210 mustload.go:66] Loading cluster: pause-528307
	I1123 10:10:18.912675  251210 config.go:182] Loaded profile config "pause-528307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:10:18.913308  251210 cli_runner.go:164] Run: docker container inspect pause-528307 --format={{.State.Status}}
	I1123 10:10:18.936030  251210 host.go:66] Checking if "pause-528307" exists ...
	I1123 10:10:18.936422  251210 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:10:19.009194  251210 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:102 SystemTime:2025-11-23 10:10:18.997545127 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:10:19.009975  251210 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-528307 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 10:10:19.013491  251210 out.go:179] * Pausing node pause-528307 ... 
	I1123 10:10:19.014587  251210 host.go:66] Checking if "pause-528307" exists ...
	I1123 10:10:19.014945  251210 ssh_runner.go:195] Run: systemctl --version
	I1123 10:10:19.015005  251210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-528307
	I1123 10:10:19.037767  251210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/pause-528307/id_rsa Username:docker}
	I1123 10:10:19.150180  251210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:10:19.167319  251210 pause.go:52] kubelet running: true
	I1123 10:10:19.167423  251210 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:10:19.334258  251210 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:10:19.334379  251210 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:10:19.417141  251210 cri.go:89] found id: "1ad0225f3a144eac06e8e40d1cd14563020304d1992e30a1c3e13dfba44ea7f8"
	I1123 10:10:19.417175  251210 cri.go:89] found id: "d28ec85d418ca8e19a54b4f89de49657fde77bd215e94a2df5dd6926463e3be2"
	I1123 10:10:19.417182  251210 cri.go:89] found id: "ac99950a7e098e323bfee248673e4c31ba37425f1790766ec3dc49bec892737e"
	I1123 10:10:19.417188  251210 cri.go:89] found id: "a2749d18f881b5e92bc48e60abe4ffbee39700a0e7bb488c9684767788ec399d"
	I1123 10:10:19.417193  251210 cri.go:89] found id: "244846b68f3f5bc25776d5a1acdbcfdcf54f1966e1908fe00aef4c21b33f79a8"
	I1123 10:10:19.417198  251210 cri.go:89] found id: "c36df78f28a7ca903cc5bf44bda92b9e4c12e3a38a41fea5d8f9e265a7a9fb0b"
	I1123 10:10:19.417202  251210 cri.go:89] found id: "d5bd829f0253e0ab67569dd8947f9e0594c58c0202a535b8f4a14e048463283c"
	I1123 10:10:19.417207  251210 cri.go:89] found id: ""
	I1123 10:10:19.417259  251210 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:10:19.432711  251210 retry.go:31] will retry after 215.986099ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:10:19Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:10:19.649344  251210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:10:19.666585  251210 pause.go:52] kubelet running: false
	I1123 10:10:19.666655  251210 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:10:19.813826  251210 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:10:19.813952  251210 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:10:19.901148  251210 cri.go:89] found id: "1ad0225f3a144eac06e8e40d1cd14563020304d1992e30a1c3e13dfba44ea7f8"
	I1123 10:10:19.901174  251210 cri.go:89] found id: "d28ec85d418ca8e19a54b4f89de49657fde77bd215e94a2df5dd6926463e3be2"
	I1123 10:10:19.901179  251210 cri.go:89] found id: "ac99950a7e098e323bfee248673e4c31ba37425f1790766ec3dc49bec892737e"
	I1123 10:10:19.901183  251210 cri.go:89] found id: "a2749d18f881b5e92bc48e60abe4ffbee39700a0e7bb488c9684767788ec399d"
	I1123 10:10:19.901186  251210 cri.go:89] found id: "244846b68f3f5bc25776d5a1acdbcfdcf54f1966e1908fe00aef4c21b33f79a8"
	I1123 10:10:19.901189  251210 cri.go:89] found id: "c36df78f28a7ca903cc5bf44bda92b9e4c12e3a38a41fea5d8f9e265a7a9fb0b"
	I1123 10:10:19.901193  251210 cri.go:89] found id: "d5bd829f0253e0ab67569dd8947f9e0594c58c0202a535b8f4a14e048463283c"
	I1123 10:10:19.901198  251210 cri.go:89] found id: ""
	I1123 10:10:19.901248  251210 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:10:19.914944  251210 retry.go:31] will retry after 558.346367ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:10:19Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:10:20.473710  251210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:10:20.487073  251210 pause.go:52] kubelet running: false
	I1123 10:10:20.487153  251210 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:10:20.594248  251210 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:10:20.594342  251210 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:10:20.660399  251210 cri.go:89] found id: "1ad0225f3a144eac06e8e40d1cd14563020304d1992e30a1c3e13dfba44ea7f8"
	I1123 10:10:20.660426  251210 cri.go:89] found id: "d28ec85d418ca8e19a54b4f89de49657fde77bd215e94a2df5dd6926463e3be2"
	I1123 10:10:20.660433  251210 cri.go:89] found id: "ac99950a7e098e323bfee248673e4c31ba37425f1790766ec3dc49bec892737e"
	I1123 10:10:20.660437  251210 cri.go:89] found id: "a2749d18f881b5e92bc48e60abe4ffbee39700a0e7bb488c9684767788ec399d"
	I1123 10:10:20.660441  251210 cri.go:89] found id: "244846b68f3f5bc25776d5a1acdbcfdcf54f1966e1908fe00aef4c21b33f79a8"
	I1123 10:10:20.660445  251210 cri.go:89] found id: "c36df78f28a7ca903cc5bf44bda92b9e4c12e3a38a41fea5d8f9e265a7a9fb0b"
	I1123 10:10:20.660449  251210 cri.go:89] found id: "d5bd829f0253e0ab67569dd8947f9e0594c58c0202a535b8f4a14e048463283c"
	I1123 10:10:20.660453  251210 cri.go:89] found id: ""
	I1123 10:10:20.660502  251210 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:10:20.672644  251210 retry.go:31] will retry after 343.589876ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:10:20Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:10:21.017276  251210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:10:21.030837  251210 pause.go:52] kubelet running: false
	I1123 10:10:21.030902  251210 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:10:21.154463  251210 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:10:21.154561  251210 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:10:21.220808  251210 cri.go:89] found id: "1ad0225f3a144eac06e8e40d1cd14563020304d1992e30a1c3e13dfba44ea7f8"
	I1123 10:10:21.220836  251210 cri.go:89] found id: "d28ec85d418ca8e19a54b4f89de49657fde77bd215e94a2df5dd6926463e3be2"
	I1123 10:10:21.220842  251210 cri.go:89] found id: "ac99950a7e098e323bfee248673e4c31ba37425f1790766ec3dc49bec892737e"
	I1123 10:10:21.220847  251210 cri.go:89] found id: "a2749d18f881b5e92bc48e60abe4ffbee39700a0e7bb488c9684767788ec399d"
	I1123 10:10:21.220852  251210 cri.go:89] found id: "244846b68f3f5bc25776d5a1acdbcfdcf54f1966e1908fe00aef4c21b33f79a8"
	I1123 10:10:21.220856  251210 cri.go:89] found id: "c36df78f28a7ca903cc5bf44bda92b9e4c12e3a38a41fea5d8f9e265a7a9fb0b"
	I1123 10:10:21.220860  251210 cri.go:89] found id: "d5bd829f0253e0ab67569dd8947f9e0594c58c0202a535b8f4a14e048463283c"
	I1123 10:10:21.220863  251210 cri.go:89] found id: ""
	I1123 10:10:21.220903  251210 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:10:21.389840  251210 out.go:203] 
	W1123 10:10:21.520419  251210 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:10:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:10:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:10:21.520444  251210 out.go:285] * 
	* 
	W1123 10:10:21.524911  251210 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:10:21.689492  251210 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-528307 --alsologtostderr -v=5" : exit status 80
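Reading the stderr above: pause gets as far as listing the kube-system containers with crictl, but each attempt to enumerate running containers with `sudo runc list -f json` exits with status 1 because `/run/runc` does not exist on the node, and once the retries are exhausted the command aborts with GUEST_PAUSE. A minimal way to replay those probes by hand against this profile (the crictl and runc commands are copied from the log; the trailing `ls` is only an added sanity check for the missing state directory):

    out/minikube-linux-amd64 -p pause-528307 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    out/minikube-linux-amd64 -p pause-528307 ssh "sudo runc list -f json"   # reproduces: open /run/runc: no such file or directory
    out/minikube-linux-amd64 -p pause-528307 ssh "ls -ld /run/runc"         # confirm whether the runc state dir exists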
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-528307
helpers_test.go:243: (dbg) docker inspect pause-528307:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ddcf7f443532024c4e4190a773d46febb6a13d2534b1a8dbafbf58e4e1307b80",
	        "Created": "2025-11-23T10:09:31.913438807Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 240758,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:09:32.239333335Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/ddcf7f443532024c4e4190a773d46febb6a13d2534b1a8dbafbf58e4e1307b80/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ddcf7f443532024c4e4190a773d46febb6a13d2534b1a8dbafbf58e4e1307b80/hostname",
	        "HostsPath": "/var/lib/docker/containers/ddcf7f443532024c4e4190a773d46febb6a13d2534b1a8dbafbf58e4e1307b80/hosts",
	        "LogPath": "/var/lib/docker/containers/ddcf7f443532024c4e4190a773d46febb6a13d2534b1a8dbafbf58e4e1307b80/ddcf7f443532024c4e4190a773d46febb6a13d2534b1a8dbafbf58e4e1307b80-json.log",
	        "Name": "/pause-528307",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-528307:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-528307",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ddcf7f443532024c4e4190a773d46febb6a13d2534b1a8dbafbf58e4e1307b80",
	                "LowerDir": "/var/lib/docker/overlay2/115720dd1f64b8a38e4e46f1b263c3b5bd7272d85e6b40b1b190e3cb1c4d63c5-init/diff:/var/lib/docker/overlay2/fa24abb4c55f78a010c7e2a32f724b8d5e912441e40bb77877899b0e5f3a9c8d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/115720dd1f64b8a38e4e46f1b263c3b5bd7272d85e6b40b1b190e3cb1c4d63c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/115720dd1f64b8a38e4e46f1b263c3b5bd7272d85e6b40b1b190e3cb1c4d63c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/115720dd1f64b8a38e4e46f1b263c3b5bd7272d85e6b40b1b190e3cb1c4d63c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-528307",
	                "Source": "/var/lib/docker/volumes/pause-528307/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-528307",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-528307",
	                "name.minikube.sigs.k8s.io": "pause-528307",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "4c68877e69de24fa1914d851de1975254adad1b4a51799b1f7dab2e565bab7ca",
	            "SandboxKey": "/var/run/docker/netns/4c68877e69de",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32975"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-528307": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cce7abf031edbafbd41cc78adbddea4b355a90181d22b37bccc90851bb53148d",
	                    "EndpointID": "2b6b688af3d5bf2426b01336d1fe77d810056f454466759f05b4f19c565187c4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "52:eb:ef:5a:47:0c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-528307",
	                        "ddcf7f443532"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
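The `Ports` block in the inspect output above is where the SSH endpoint used by the pause command (127.0.0.1:32973) comes from; the same Go template the log shows cli_runner executing can be replayed directly to extract it (template copied from the log, only the shell quoting simplified):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-528307
    # prints 32973 for this run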
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-528307 -n pause-528307
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-528307 -n pause-528307: exit status 2 (329.927436ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-528307 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-528307 logs -n 25: (1.642317287s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p test-preload-954233                                                                                                                   │ test-preload-954233         │ jenkins │ v1.37.0 │ 23 Nov 25 10:07 UTC │ 23 Nov 25 10:07 UTC │
	│ start   │ -p scheduled-stop-474690 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:07 UTC │ 23 Nov 25 10:07 UTC │
	│ stop    │ -p scheduled-stop-474690 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:07 UTC │                     │
	│ stop    │ -p scheduled-stop-474690 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:07 UTC │                     │
	│ stop    │ -p scheduled-stop-474690 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:07 UTC │                     │
	│ stop    │ -p scheduled-stop-474690 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:07 UTC │                     │
	│ stop    │ -p scheduled-stop-474690 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:07 UTC │                     │
	│ stop    │ -p scheduled-stop-474690 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:07 UTC │                     │
	│ stop    │ -p scheduled-stop-474690 --cancel-scheduled                                                                                              │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:07 UTC │ 23 Nov 25 10:07 UTC │
	│ stop    │ -p scheduled-stop-474690 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ stop    │ -p scheduled-stop-474690 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ stop    │ -p scheduled-stop-474690 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ delete  │ -p scheduled-stop-474690                                                                                                                 │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p insufficient-storage-001514 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-001514 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │                     │
	│ delete  │ -p insufficient-storage-001514                                                                                                           │ insufficient-storage-001514 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p kubernetes-upgrade-069634 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-069634   │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p offline-crio-065092 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-065092         │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p pause-528307 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-528307                │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p missing-upgrade-417054 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-417054      │ jenkins │ v1.32.0 │ 23 Nov 25 10:09 UTC │                     │
	│ stop    │ -p kubernetes-upgrade-069634                                                                                                             │ kubernetes-upgrade-069634   │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p kubernetes-upgrade-069634 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-069634   │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │                     │
	│ start   │ -p pause-528307 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-528307                │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ delete  │ -p offline-crio-065092                                                                                                                   │ offline-crio-065092         │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p force-systemd-env-465707 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                               │ force-systemd-env-465707    │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	│ pause   │ -p pause-528307 --alsologtostderr -v=5                                                                                                   │ pause-528307                │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:10:15
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:10:15.861287  250596 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:10:15.861601  250596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:10:15.861614  250596 out.go:374] Setting ErrFile to fd 2...
	I1123 10:10:15.861622  250596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:10:15.861898  250596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:10:15.862449  250596 out.go:368] Setting JSON to false
	I1123 10:10:15.863627  250596 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10357,"bootTime":1763882259,"procs":273,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:10:15.863713  250596 start.go:143] virtualization: kvm guest
	I1123 10:10:15.866689  250596 out.go:179] * [force-systemd-env-465707] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:10:15.868219  250596 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:10:15.868212  250596 notify.go:221] Checking for updates...
	I1123 10:10:15.869711  250596 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:10:15.870981  250596 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:10:15.872194  250596 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 10:10:15.873410  250596 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:10:15.874557  250596 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1123 10:10:15.876566  250596 config.go:182] Loaded profile config "kubernetes-upgrade-069634": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:10:15.876769  250596 config.go:182] Loaded profile config "missing-upgrade-417054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1123 10:10:15.876944  250596 config.go:182] Loaded profile config "pause-528307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:10:15.877067  250596 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:10:15.906025  250596 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 10:10:15.906244  250596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:10:15.980798  250596 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:60 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-23 10:10:15.968550175 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:10:15.980957  250596 docker.go:319] overlay module found
	I1123 10:10:15.983947  250596 out.go:179] * Using the docker driver based on user configuration
	I1123 10:10:15.985249  250596 start.go:309] selected driver: docker
	I1123 10:10:15.985269  250596 start.go:927] validating driver "docker" against <nil>
	I1123 10:10:15.985285  250596 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:10:15.986061  250596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:10:16.059810  250596 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-23 10:10:16.047427531 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:10:16.060024  250596 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 10:10:16.060317  250596 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 10:10:16.063135  250596 out.go:179] * Using Docker driver with root privileges
	I1123 10:10:16.064590  250596 cni.go:84] Creating CNI manager for ""
	I1123 10:10:16.064672  250596 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:10:16.064688  250596 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:10:16.064795  250596 start.go:353] cluster config:
	{Name:force-systemd-env-465707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-465707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:10:16.066232  250596 out.go:179] * Starting "force-systemd-env-465707" primary control-plane node in "force-systemd-env-465707" cluster
	I1123 10:10:16.067567  250596 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:10:16.068863  250596 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:10:16.070049  250596 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:10:16.070105  250596 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 10:10:16.070119  250596 cache.go:65] Caching tarball of preloaded images
	I1123 10:10:16.070144  250596 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:10:16.070234  250596 preload.go:238] Found /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 10:10:16.070251  250596 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:10:16.070380  250596 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/force-systemd-env-465707/config.json ...
	I1123 10:10:16.070417  250596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/force-systemd-env-465707/config.json: {Name:mk5130267bb7f0d446b287ca283f1c4507614563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:10:16.096213  250596 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:10:16.096249  250596 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:10:16.096269  250596 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:10:16.096313  250596 start.go:360] acquireMachinesLock for force-systemd-env-465707: {Name:mk4315d27d905a91bdd8d22a15cc79647e055ded Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:10:16.096431  250596 start.go:364] duration metric: took 92.327µs to acquireMachinesLock for "force-systemd-env-465707"
	I1123 10:10:16.096463  250596 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-465707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-465707 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:10:16.096588  250596 start.go:125] createHost starting for "" (driver="docker")
	I1123 10:10:15.523142  249292 cli_runner.go:164] Run: docker network inspect pause-528307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:10:15.545366  249292 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:10:15.551248  249292 kubeadm.go:884] updating cluster {Name:pause-528307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-528307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:10:15.551430  249292 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:10:15.551497  249292 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:10:15.593830  249292 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:10:15.593858  249292 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:10:15.593913  249292 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:10:15.627253  249292 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:10:15.627282  249292 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:10:15.627292  249292 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 10:10:15.627430  249292 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-528307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-528307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:10:15.627520  249292 ssh_runner.go:195] Run: crio config
	I1123 10:10:15.687059  249292 cni.go:84] Creating CNI manager for ""
	I1123 10:10:15.687096  249292 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:10:15.687117  249292 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:10:15.687146  249292 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-528307 NodeName:pause-528307 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:10:15.687323  249292 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-528307"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
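
The kubelet systemd drop-in and the multi-document kubeadm/kubelet/kube-proxy configuration above are rendered by minikube from templates before being copied onto the node. As a rough illustration only (the struct and template below are invented for this sketch, not minikube's bootstrapper code), rendering a similar kubelet drop-in with Go's text/template looks like this:

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeletFlags holds the handful of values substituted into the drop-in.
    // Field names are illustrative, not minikube's real config structs.
    type kubeletFlags struct {
    	KubeletPath string
    	Hostname    string
    	NodeIP      string
    }

    const dropIn = `[Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
    `

    func main() {
    	tmpl := template.Must(template.New("10-kubeadm.conf").Parse(dropIn))
    	// Values taken from the log above (pause-528307 on 192.168.76.2).
    	_ = tmpl.Execute(os.Stdout, kubeletFlags{
    		KubeletPath: "/var/lib/minikube/binaries/v1.34.1/kubelet",
    		Hostname:    "pause-528307",
    		NodeIP:      "192.168.76.2",
    	})
    }

The rendered text is what ends up in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf in the scp step that follows.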
	
	I1123 10:10:15.687403  249292 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:10:15.697523  249292 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:10:15.697596  249292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:10:15.707548  249292 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1123 10:10:15.724429  249292 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:10:15.742570  249292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1123 10:10:15.760019  249292 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:10:15.765266  249292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:10:15.917613  249292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:10:15.936689  249292 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307 for IP: 192.168.76.2
	I1123 10:10:15.936714  249292 certs.go:195] generating shared ca certs ...
	I1123 10:10:15.936746  249292 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:10:15.936913  249292 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 10:10:15.936987  249292 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 10:10:15.936999  249292 certs.go:257] generating profile certs ...
	I1123 10:10:15.937129  249292 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/client.key
	I1123 10:10:15.937208  249292 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/apiserver.key.959c932b
	I1123 10:10:15.937263  249292 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/proxy-client.key
	I1123 10:10:15.937435  249292 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem (1338 bytes)
	W1123 10:10:15.937489  249292 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870_empty.pem, impossibly tiny 0 bytes
	I1123 10:10:15.937501  249292 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 10:10:15.937538  249292 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:10:15.937571  249292 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:10:15.937600  249292 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 10:10:15.937657  249292 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:10:15.938665  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:10:15.964982  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:10:15.989451  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:10:16.013076  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 10:10:16.037142  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1123 10:10:16.061700  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 10:10:16.084425  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:10:16.108076  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:10:16.131237  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /usr/share/ca-certificates/678702.pem (1708 bytes)
	I1123 10:10:16.154483  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:10:16.178588  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem --> /usr/share/ca-certificates/67870.pem (1338 bytes)
	I1123 10:10:16.203423  249292 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:10:16.220821  249292 ssh_runner.go:195] Run: openssl version
	I1123 10:10:16.229386  249292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67870.pem && ln -fs /usr/share/ca-certificates/67870.pem /etc/ssl/certs/67870.pem"
	I1123 10:10:16.242459  249292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67870.pem
	I1123 10:10:16.247681  249292 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:28 /usr/share/ca-certificates/67870.pem
	I1123 10:10:16.247750  249292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67870.pem
	I1123 10:10:16.299356  249292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67870.pem /etc/ssl/certs/51391683.0"
	I1123 10:10:16.311163  249292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678702.pem && ln -fs /usr/share/ca-certificates/678702.pem /etc/ssl/certs/678702.pem"
	I1123 10:10:16.324675  249292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678702.pem
	I1123 10:10:16.330939  249292 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:28 /usr/share/ca-certificates/678702.pem
	I1123 10:10:16.331030  249292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678702.pem
	I1123 10:10:16.382395  249292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678702.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:10:16.393675  249292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:10:16.406306  249292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:10:16.411657  249292 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:10:16.411731  249292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:10:16.465750  249292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
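
The three blocks above repeat the same pattern for each extra CA: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs as <hash>.0 (b5213941.0 for minikubeCA.pem here). A minimal sketch of that pattern, shelling out to openssl the same way the log does (the helper name is illustrative; it assumes openssl on PATH and needs root to write /etc/ssl/certs):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCert mirrors the commands in the log: compute the OpenSSL subject hash
    // of a PEM certificate and symlink it into /etc/ssl/certs as <hash>.0.
    func linkCert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// ln -fs equivalent: drop any stale link first, then create it.
    	_ = os.Remove(link)
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }

Using the subject-hash link name is what lets OpenSSL-based clients on the node find the CA without rebuilding the certificate bundle.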
	I1123 10:10:16.477038  249292 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:10:16.482508  249292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:10:16.536830  249292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:10:16.588493  249292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:10:16.642133  249292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:10:16.697960  249292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:10:16.743749  249292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
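
Each `openssl x509 -checkend 86400` run above asks whether a control-plane certificate expires within the next 24 hours. The same check can be done in-process with crypto/x509; this is a standalone sketch, not minikube's certs code:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file
    // expires within d (the equivalent of `openssl x509 -checkend <seconds>`).
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }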
	I1123 10:10:16.797685  249292 kubeadm.go:401] StartCluster: {Name:pause-528307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-528307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:10:16.797857  249292 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:10:16.797953  249292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:10:16.837735  249292 cri.go:89] found id: "1ad0225f3a144eac06e8e40d1cd14563020304d1992e30a1c3e13dfba44ea7f8"
	I1123 10:10:16.837767  249292 cri.go:89] found id: "d28ec85d418ca8e19a54b4f89de49657fde77bd215e94a2df5dd6926463e3be2"
	I1123 10:10:16.837773  249292 cri.go:89] found id: "ac99950a7e098e323bfee248673e4c31ba37425f1790766ec3dc49bec892737e"
	I1123 10:10:16.837777  249292 cri.go:89] found id: "a2749d18f881b5e92bc48e60abe4ffbee39700a0e7bb488c9684767788ec399d"
	I1123 10:10:16.837782  249292 cri.go:89] found id: "244846b68f3f5bc25776d5a1acdbcfdcf54f1966e1908fe00aef4c21b33f79a8"
	I1123 10:10:16.837786  249292 cri.go:89] found id: "c36df78f28a7ca903cc5bf44bda92b9e4c12e3a38a41fea5d8f9e265a7a9fb0b"
	I1123 10:10:16.837791  249292 cri.go:89] found id: "d5bd829f0253e0ab67569dd8947f9e0594c58c0202a535b8f4a14e048463283c"
	I1123 10:10:16.837796  249292 cri.go:89] found id: ""
	I1123 10:10:16.837853  249292 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:10:16.855001  249292 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:10:16Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:10:16.855195  249292 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:10:16.871084  249292 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:10:16.871125  249292 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:10:16.871185  249292 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:10:16.881736  249292 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:10:16.882640  249292 kubeconfig.go:125] found "pause-528307" server: "https://192.168.76.2:8443"
	I1123 10:10:16.883918  249292 kapi.go:59] client config for pause-528307: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/client.key", CAFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 10:10:16.884636  249292 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1123 10:10:16.884658  249292 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1123 10:10:16.884666  249292 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1123 10:10:16.884672  249292 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1123 10:10:16.884678  249292 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1123 10:10:16.885125  249292 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:10:16.896803  249292 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 10:10:16.896845  249292 kubeadm.go:602] duration metric: took 25.714195ms to restartPrimaryControlPlane
	I1123 10:10:16.896859  249292 kubeadm.go:403] duration metric: took 99.185241ms to StartCluster
	I1123 10:10:16.896881  249292 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:10:16.896976  249292 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:10:16.897862  249292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:10:16.898144  249292 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:10:16.898265  249292 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:10:16.898480  249292 config.go:182] Loaded profile config "pause-528307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:10:16.900373  249292 out.go:179] * Verifying Kubernetes components...
	I1123 10:10:16.900450  249292 out.go:179] * Enabled addons: 
	I1123 10:10:16.901940  249292 addons.go:530] duration metric: took 3.684025ms for enable addons: enabled=[]
	I1123 10:10:16.901979  249292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:10:17.059661  249292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:10:17.077139  249292 node_ready.go:35] waiting up to 6m0s for node "pause-528307" to be "Ready" ...
	I1123 10:10:17.086191  249292 node_ready.go:49] node "pause-528307" is "Ready"
	I1123 10:10:17.086219  249292 node_ready.go:38] duration metric: took 9.031479ms for node "pause-528307" to be "Ready" ...
	I1123 10:10:17.086238  249292 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:10:17.086304  249292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:10:17.100057  249292 api_server.go:72] duration metric: took 201.865793ms to wait for apiserver process to appear ...
	I1123 10:10:17.100107  249292 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:10:17.100134  249292 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:10:17.105121  249292 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:10:17.106349  249292 api_server.go:141] control plane version: v1.34.1
	I1123 10:10:17.106381  249292 api_server.go:131] duration metric: took 6.264006ms to wait for apiserver health ...
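
The healthz wait is a plain HTTPS poll of https://192.168.76.2:8443/healthz until it answers 200 "ok", after which the control-plane version is read. A simplified sketch of that poll (CA path taken from the log; the retry behaviour in api_server.go is more involved, and a cluster that disables anonymous auth may also require client certificates):

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    func main() {
    	// CA path as reported in the log; everything else is a simplification.
    	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
    	}
    	for i := 0; i < 30; i++ {
    		resp, err := client.Get("https://192.168.76.2:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    				return
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Fprintln(os.Stderr, "apiserver never became healthy")
    }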
	I1123 10:10:17.106392  249292 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:10:17.109884  249292 system_pods.go:59] 7 kube-system pods found
	I1123 10:10:17.109919  249292 system_pods.go:61] "coredns-66bc5c9577-nnglq" [9f32f9bd-be71-440e-a1a2-7c971ea27ff4] Running
	I1123 10:10:17.109935  249292 system_pods.go:61] "etcd-pause-528307" [d9f542bc-1732-42da-8fb4-cca3b910dbd8] Running
	I1123 10:10:17.109940  249292 system_pods.go:61] "kindnet-mh9dq" [45e79ade-c9ae-4302-b726-b23f6a52c9ff] Running
	I1123 10:10:17.109946  249292 system_pods.go:61] "kube-apiserver-pause-528307" [662044ca-d2ca-4362-9784-c882f2824c63] Running
	I1123 10:10:17.109952  249292 system_pods.go:61] "kube-controller-manager-pause-528307" [29f640fa-d5fc-41c6-9073-52f21609dfa8] Running
	I1123 10:10:17.109960  249292 system_pods.go:61] "kube-proxy-jgn4v" [6a8ef8ed-fd78-4dab-b4f2-d48f8e87169e] Running
	I1123 10:10:17.109963  249292 system_pods.go:61] "kube-scheduler-pause-528307" [8f258b36-44c3-46cd-bca4-2e3f836cdb4d] Running
	I1123 10:10:17.109970  249292 system_pods.go:74] duration metric: took 3.571281ms to wait for pod list to return data ...
	I1123 10:10:17.109984  249292 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:10:17.112040  249292 default_sa.go:45] found service account: "default"
	I1123 10:10:17.112063  249292 default_sa.go:55] duration metric: took 2.070706ms for default service account to be created ...
	I1123 10:10:17.112073  249292 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:10:17.116650  249292 system_pods.go:86] 7 kube-system pods found
	I1123 10:10:17.116679  249292 system_pods.go:89] "coredns-66bc5c9577-nnglq" [9f32f9bd-be71-440e-a1a2-7c971ea27ff4] Running
	I1123 10:10:17.116687  249292 system_pods.go:89] "etcd-pause-528307" [d9f542bc-1732-42da-8fb4-cca3b910dbd8] Running
	I1123 10:10:17.116692  249292 system_pods.go:89] "kindnet-mh9dq" [45e79ade-c9ae-4302-b726-b23f6a52c9ff] Running
	I1123 10:10:17.116697  249292 system_pods.go:89] "kube-apiserver-pause-528307" [662044ca-d2ca-4362-9784-c882f2824c63] Running
	I1123 10:10:17.116703  249292 system_pods.go:89] "kube-controller-manager-pause-528307" [29f640fa-d5fc-41c6-9073-52f21609dfa8] Running
	I1123 10:10:17.116707  249292 system_pods.go:89] "kube-proxy-jgn4v" [6a8ef8ed-fd78-4dab-b4f2-d48f8e87169e] Running
	I1123 10:10:17.116713  249292 system_pods.go:89] "kube-scheduler-pause-528307" [8f258b36-44c3-46cd-bca4-2e3f836cdb4d] Running
	I1123 10:10:17.116722  249292 system_pods.go:126] duration metric: took 4.641327ms to wait for k8s-apps to be running ...
	I1123 10:10:17.116739  249292 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:10:17.116790  249292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:10:17.134213  249292 system_svc.go:56] duration metric: took 17.455018ms WaitForService to wait for kubelet
	I1123 10:10:17.134251  249292 kubeadm.go:587] duration metric: took 236.066754ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:10:17.134283  249292 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:10:17.137677  249292 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:10:17.137712  249292 node_conditions.go:123] node cpu capacity is 8
	I1123 10:10:17.137729  249292 node_conditions.go:105] duration metric: took 3.439254ms to run NodePressure ...
	I1123 10:10:17.137745  249292 start.go:242] waiting for startup goroutines ...
	I1123 10:10:17.137754  249292 start.go:247] waiting for cluster config update ...
	I1123 10:10:17.137765  249292 start.go:256] writing updated cluster config ...
	I1123 10:10:17.138143  249292 ssh_runner.go:195] Run: rm -f paused
	I1123 10:10:17.143124  249292 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:10:17.143684  249292 kapi.go:59] client config for pause-528307: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/client.key", CAFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 10:10:17.146812  249292 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nnglq" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:17.152165  249292 pod_ready.go:94] pod "coredns-66bc5c9577-nnglq" is "Ready"
	I1123 10:10:17.152195  249292 pod_ready.go:86] duration metric: took 5.358234ms for pod "coredns-66bc5c9577-nnglq" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:17.154755  249292 pod_ready.go:83] waiting for pod "etcd-pause-528307" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:17.159547  249292 pod_ready.go:94] pod "etcd-pause-528307" is "Ready"
	I1123 10:10:17.159576  249292 pod_ready.go:86] duration metric: took 4.794576ms for pod "etcd-pause-528307" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:17.161838  249292 pod_ready.go:83] waiting for pod "kube-apiserver-pause-528307" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:17.165952  249292 pod_ready.go:94] pod "kube-apiserver-pause-528307" is "Ready"
	I1123 10:10:17.166015  249292 pod_ready.go:86] duration metric: took 4.153656ms for pod "kube-apiserver-pause-528307" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:17.167966  249292 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-528307" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:17.547997  249292 pod_ready.go:94] pod "kube-controller-manager-pause-528307" is "Ready"
	I1123 10:10:17.548034  249292 pod_ready.go:86] duration metric: took 380.046729ms for pod "kube-controller-manager-pause-528307" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:17.747583  249292 pod_ready.go:83] waiting for pod "kube-proxy-jgn4v" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:18.147663  249292 pod_ready.go:94] pod "kube-proxy-jgn4v" is "Ready"
	I1123 10:10:18.147691  249292 pod_ready.go:86] duration metric: took 400.076957ms for pod "kube-proxy-jgn4v" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:18.346944  249292 pod_ready.go:83] waiting for pod "kube-scheduler-pause-528307" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:18.747707  249292 pod_ready.go:94] pod "kube-scheduler-pause-528307" is "Ready"
	I1123 10:10:18.747742  249292 pod_ready.go:86] duration metric: took 400.765022ms for pod "kube-scheduler-pause-528307" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:18.747762  249292 pod_ready.go:40] duration metric: took 1.604591375s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:10:18.806282  249292 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:10:18.812260  249292 out.go:179] * Done! kubectl is now configured to use "pause-528307" cluster and "default" namespace by default
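
The pod_ready waits above look up each labelled kube-system pod and check its Ready condition through the API server. A compact client-go sketch of the same idea (kubeconfig path taken from the log; this is not minikube's pod_ready.go, and it lists the pods once rather than polling):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's Ready condition is True.
    func isReady(pod corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21968-64343/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%-45s ready=%v\n", p.Name, isReady(p))
    	}
    }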
	I1123 10:10:16.287939  246976 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1123 10:10:16.288016  246976 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 10:10:16.100647  250596 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 10:10:16.100921  250596 start.go:159] libmachine.API.Create for "force-systemd-env-465707" (driver="docker")
	I1123 10:10:16.100972  250596 client.go:173] LocalClient.Create starting
	I1123 10:10:16.101103  250596 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem
	I1123 10:10:16.101152  250596 main.go:143] libmachine: Decoding PEM data...
	I1123 10:10:16.101171  250596 main.go:143] libmachine: Parsing certificate...
	I1123 10:10:16.101278  250596 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem
	I1123 10:10:16.101316  250596 main.go:143] libmachine: Decoding PEM data...
	I1123 10:10:16.101335  250596 main.go:143] libmachine: Parsing certificate...
	I1123 10:10:16.101774  250596 cli_runner.go:164] Run: docker network inspect force-systemd-env-465707 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 10:10:16.123765  250596 cli_runner.go:211] docker network inspect force-systemd-env-465707 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 10:10:16.123865  250596 network_create.go:284] running [docker network inspect force-systemd-env-465707] to gather additional debugging logs...
	I1123 10:10:16.123898  250596 cli_runner.go:164] Run: docker network inspect force-systemd-env-465707
	W1123 10:10:16.144704  250596 cli_runner.go:211] docker network inspect force-systemd-env-465707 returned with exit code 1
	I1123 10:10:16.144744  250596 network_create.go:287] error running [docker network inspect force-systemd-env-465707]: docker network inspect force-systemd-env-465707: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-465707 not found
	I1123 10:10:16.144762  250596 network_create.go:289] output of [docker network inspect force-systemd-env-465707]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-465707 not found
	
	** /stderr **
	I1123 10:10:16.144894  250596 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:10:16.167573  250596 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9af1e2c0d039 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:86:44:24:e5:b5} reservation:<nil>}
	I1123 10:10:16.168226  250596 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-461f783b5692 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:1f:63:e6:a3:d5} reservation:<nil>}
	I1123 10:10:16.168837  250596 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-00c53b2b0c8c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:de:97:e2:97:bc:92} reservation:<nil>}
	I1123 10:10:16.169524  250596 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-cce7abf031ed IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:16:5f:a9:f8:75:ce} reservation:<nil>}
	I1123 10:10:16.170483  250596 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e35ec0}
	I1123 10:10:16.170512  250596 network_create.go:124] attempt to create docker network force-systemd-env-465707 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 10:10:16.170580  250596 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-465707 force-systemd-env-465707
	I1123 10:10:16.231514  250596 network_create.go:108] docker network force-systemd-env-465707 192.168.85.0/24 created
	I1123 10:10:16.231551  250596 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-465707" container
	I1123 10:10:16.231636  250596 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 10:10:16.253263  250596 cli_runner.go:164] Run: docker volume create force-systemd-env-465707 --label name.minikube.sigs.k8s.io=force-systemd-env-465707 --label created_by.minikube.sigs.k8s.io=true
	I1123 10:10:16.276289  250596 oci.go:103] Successfully created a docker volume force-systemd-env-465707
	I1123 10:10:16.276392  250596 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-465707-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-465707 --entrypoint /usr/bin/test -v force-systemd-env-465707:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 10:10:17.051622  250596 oci.go:107] Successfully prepared a docker volume force-systemd-env-465707
	I1123 10:10:17.051715  250596 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:10:17.051733  250596 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 10:10:17.051816  250596 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-465707:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
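
Subnet selection for the new force-systemd-env-465707 network walks candidate private /24 ranges (192.168.49.0, .58.0, .67.0, ... in steps of 9), skipping any already backed by an existing bridge, and settles on 192.168.85.0/24 here. A stripped-down sketch of that walk (the taken set is hard-coded from the log, whereas minikube inspects Docker's existing networks):

    package main

    import "fmt"

    // firstFreeSubnet mimics the walk in the log: start at 192.168.49.0/24 and
    // advance the third octet by 9 until a subnet is not in the taken set.
    func firstFreeSubnet(taken map[string]bool) string {
    	for octet := 49; octet <= 254; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken[subnet] {
    			return subnet
    		}
    	}
    	return ""
    }

    func main() {
    	// Subnets reported as taken in the log above.
    	taken := map[string]bool{
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    		"192.168.76.0/24": true,
    	}
    	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.85.0/24
    }

Running it with the four taken subnets above prints 192.168.85.0/24, matching the network the log goes on to create.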
	
	
	==> CRI-O <==
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.217774265Z" level=info msg="RDT not available in the host system"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.217787743Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.218791549Z" level=info msg="Conmon does support the --sync option"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.218818977Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.2188387Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.219653094Z" level=info msg="Conmon does support the --sync option"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.219674824Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.224050814Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.224079449Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.224886769Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.225586397Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.225651697Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.393481266Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-nnglq Namespace:kube-system ID:27190eb86e4b14a93c6a425ae9a569ecd6400df16cc523bd12073f6549002674 UID:9f32f9bd-be71-440e-a1a2-7c971ea27ff4 NetNS:/var/run/netns/f4e362c5-100d-45d4-ace1-d13ca8f74002 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009840b8}] Aliases:map[]}"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.393662395Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-nnglq for CNI network kindnet (type=ptp)"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.394142728Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.394171141Z" level=info msg="Starting seccomp notifier watcher"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.394243647Z" level=info msg="Create NRI interface"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.394355852Z" level=info msg="built-in NRI default validator is disabled"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.39437114Z" level=info msg="runtime interface created"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.394384783Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.394392672Z" level=info msg="runtime interface starting up..."
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.394400917Z" level=info msg="starting plugins..."
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.394416154Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.394748091Z" level=info msg="No systemd watchdog enabled"
	Nov 23 10:10:15 pause-528307 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	1ad0225f3a144       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   15 seconds ago      Running             coredns                   0                   27190eb86e4b1       coredns-66bc5c9577-nnglq               kube-system
	d28ec85d418ca       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   27 seconds ago      Running             kindnet-cni               0                   6ebd779429110       kindnet-mh9dq                          kube-system
	ac99950a7e098       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   27 seconds ago      Running             kube-proxy                0                   178ce4605b1e9       kube-proxy-jgn4v                       kube-system
	a2749d18f881b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   38 seconds ago      Running             kube-controller-manager   0                   683273d014237       kube-controller-manager-pause-528307   kube-system
	244846b68f3f5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   38 seconds ago      Running             kube-apiserver            0                   7d95a91920606       kube-apiserver-pause-528307            kube-system
	c36df78f28a7c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   38 seconds ago      Running             kube-scheduler            0                   ad0d693a3f027       kube-scheduler-pause-528307            kube-system
	d5bd829f0253e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   38 seconds ago      Running             etcd                      0                   1092e0f415f91       etcd-pause-528307                      kube-system
	
	
	==> coredns [1ad0225f3a144eac06e8e40d1cd14563020304d1992e30a1c3e13dfba44ea7f8] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33569 - 21917 "HINFO IN 3934179666926209465.5054150719438489396. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.041515213s
	
	
	==> describe nodes <==
	Name:               pause-528307
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-528307
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=pause-528307
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_09_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:09:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-528307
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:10:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:10:10 +0000   Sun, 23 Nov 2025 10:09:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:10:10 +0000   Sun, 23 Nov 2025 10:09:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:10:10 +0000   Sun, 23 Nov 2025 10:09:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:10:10 +0000   Sun, 23 Nov 2025 10:10:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-528307
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                878524ca-ae1f-4dff-a09b-2d2512c29616
	  Boot ID:                    37682299-5e60-467e-85b2-43c912a4056e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-nnglq                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-pause-528307                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-mh9dq                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-pause-528307             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-pause-528307    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-jgn4v                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-pause-528307             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node pause-528307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node pause-528307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node pause-528307 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node pause-528307 event: Registered Node pause-528307 in Controller
	  Normal  NodeReady                17s   kubelet          Node pause-528307 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 09:25] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.037608] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.023905] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.023966] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000012] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +2.048049] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +4.031511] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +8.255356] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[ +16.383752] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 09:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	
	
	==> etcd [d5bd829f0253e0ab67569dd8947f9e0594c58c0202a535b8f4a14e048463283c] <==
	{"level":"warn","ts":"2025-11-23T10:09:47.041962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:47.049689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:47.119349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:54.748179Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.512334ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" limit:1 ","response":"range_response_count:1 size:197"}
	{"level":"info","ts":"2025-11-23T10:09:54.748268Z","caller":"traceutil/trace.go:172","msg":"trace[301164747] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:1; response_revision:296; }","duration":"115.632574ms","start":"2025-11-23T10:09:54.632619Z","end":"2025-11-23T10:09:54.748251Z","steps":["trace[301164747] 'agreement among raft nodes before linearized reading'  (duration: 97.283944ms)","trace[301164747] 'range keys from in-memory index tree'  (duration: 18.127171ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T10:09:54.748281Z","caller":"traceutil/trace.go:172","msg":"trace[1292088339] transaction","detail":"{read_only:false; response_revision:297; number_of_response:1; }","duration":"153.902543ms","start":"2025-11-23T10:09:54.594353Z","end":"2025-11-23T10:09:54.748255Z","steps":["trace[1292088339] 'process raft request'  (duration: 135.595316ms)","trace[1292088339] 'compare'  (duration: 18.104584ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T10:09:54.748310Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.450961ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/pause-528307\" limit:1 ","response":"range_response_count:1 size:552"}
	{"level":"info","ts":"2025-11-23T10:09:54.748361Z","caller":"traceutil/trace.go:172","msg":"trace[1410084704] range","detail":"{range_begin:/registry/leases/kube-node-lease/pause-528307; range_end:; response_count:1; response_revision:297; }","duration":"113.507306ms","start":"2025-11-23T10:09:54.634842Z","end":"2025-11-23T10:09:54.748349Z","steps":["trace[1410084704] 'agreement among raft nodes before linearized reading'  (duration: 113.376917ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T10:10:04.522549Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.52702ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356837615740364 > lease_revoke:<id:59069ab030e323f8>","response":"size:28"}
	{"level":"warn","ts":"2025-11-23T10:10:07.547413Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"169.152775ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-nnglq\" limit:1 ","response":"range_response_count:1 size:5545"}
	{"level":"info","ts":"2025-11-23T10:10:07.547486Z","caller":"traceutil/trace.go:172","msg":"trace[1265247078] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-nnglq; range_end:; response_count:1; response_revision:391; }","duration":"169.23842ms","start":"2025-11-23T10:10:07.378230Z","end":"2025-11-23T10:10:07.547468Z","steps":["trace[1265247078] 'range keys from in-memory index tree'  (duration: 168.972864ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T10:10:07.547628Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"198.886698ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T10:10:07.547738Z","caller":"traceutil/trace.go:172","msg":"trace[427600288] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:391; }","duration":"199.004203ms","start":"2025-11-23T10:10:07.348721Z","end":"2025-11-23T10:10:07.547725Z","steps":["trace[427600288] 'range keys from in-memory index tree'  (duration: 198.78509ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:10:07.733506Z","caller":"traceutil/trace.go:172","msg":"trace[2034828500] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"177.638506ms","start":"2025-11-23T10:10:07.555841Z","end":"2025-11-23T10:10:07.733479Z","steps":["trace[2034828500] 'process raft request'  (duration: 177.546366ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:10:07.894589Z","caller":"traceutil/trace.go:172","msg":"trace[2051461231] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"152.080948ms","start":"2025-11-23T10:10:07.742486Z","end":"2025-11-23T10:10:07.894567Z","steps":["trace[2051461231] 'process raft request'  (duration: 152.042484ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:10:07.894679Z","caller":"traceutil/trace.go:172","msg":"trace[1040618974] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"156.727443ms","start":"2025-11-23T10:10:07.737931Z","end":"2025-11-23T10:10:07.894659Z","steps":["trace[1040618974] 'process raft request'  (duration: 156.54679ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:10:07.894705Z","caller":"traceutil/trace.go:172","msg":"trace[2037295264] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"156.816247ms","start":"2025-11-23T10:10:07.737867Z","end":"2025-11-23T10:10:07.894683Z","steps":["trace[2037295264] 'process raft request'  (duration: 59.367872ms)","trace[2037295264] 'compare'  (duration: 97.134973ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T10:10:08.035872Z","caller":"traceutil/trace.go:172","msg":"trace[1903948012] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"134.617317ms","start":"2025-11-23T10:10:07.901239Z","end":"2025-11-23T10:10:08.035856Z","steps":["trace[1903948012] 'process raft request'  (duration: 134.562726ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:10:08.035888Z","caller":"traceutil/trace.go:172","msg":"trace[1541417610] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"135.699327ms","start":"2025-11-23T10:10:07.900164Z","end":"2025-11-23T10:10:08.035864Z","steps":["trace[1541417610] 'process raft request'  (duration: 108.646019ms)","trace[1541417610] 'compare'  (duration: 26.904879ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T10:10:08.197200Z","caller":"traceutil/trace.go:172","msg":"trace[184438368] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"116.540425ms","start":"2025-11-23T10:10:08.080642Z","end":"2025-11-23T10:10:08.197182Z","steps":["trace[184438368] 'process raft request'  (duration: 116.485201ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:10:08.197282Z","caller":"traceutil/trace.go:172","msg":"trace[890774198] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"153.492432ms","start":"2025-11-23T10:10:08.043777Z","end":"2025-11-23T10:10:08.197269Z","steps":["trace[890774198] 'process raft request'  (duration: 92.049271ms)","trace[890774198] 'compare'  (duration: 61.140163ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T10:10:08.463012Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.127495ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-528307\" limit:1 ","response":"range_response_count:1 size:7283"}
	{"level":"info","ts":"2025-11-23T10:10:08.463071Z","caller":"traceutil/trace.go:172","msg":"trace[1553662466] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-528307; range_end:; response_count:1; response_revision:400; }","duration":"161.200039ms","start":"2025-11-23T10:10:08.301859Z","end":"2025-11-23T10:10:08.463059Z","steps":["trace[1553662466] 'range keys from in-memory index tree'  (duration: 160.974494ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T10:10:08.463024Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.730238ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T10:10:08.463128Z","caller":"traceutil/trace.go:172","msg":"trace[1365454372] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:400; }","duration":"114.824137ms","start":"2025-11-23T10:10:08.348286Z","end":"2025-11-23T10:10:08.463111Z","steps":["trace[1365454372] 'range keys from in-memory index tree'  (duration: 114.604074ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:10:23 up  2:52,  0 user,  load average: 5.29, 1.81, 1.20
	Linux pause-528307 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d28ec85d418ca8e19a54b4f89de49657fde77bd215e94a2df5dd6926463e3be2] <==
	I1123 10:09:56.413213       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:09:56.413478       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 10:09:56.413629       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:09:56.413649       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:09:56.413663       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:09:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:09:56.613252       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:09:56.741434       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:09:56.741463       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:09:56.741633       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:09:57.012215       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:09:57.012243       1 metrics.go:72] Registering metrics
	I1123 10:09:57.012285       1 controller.go:711] "Syncing nftables rules"
	I1123 10:10:06.615415       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:10:06.615475       1 main.go:301] handling current node
	I1123 10:10:16.613247       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:10:16.613290       1 main.go:301] handling current node
	
	
	==> kube-apiserver [244846b68f3f5bc25776d5a1acdbcfdcf54f1966e1908fe00aef4c21b33f79a8] <==
	I1123 10:09:47.714379       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:09:47.714386       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:09:47.715183       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 10:09:47.716041       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 10:09:47.716674       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:09:47.721406       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:09:47.724514       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 10:09:47.744939       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:09:48.618462       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 10:09:48.622398       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 10:09:48.622416       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:09:49.124351       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:09:49.168065       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:09:49.222064       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 10:09:49.227781       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 10:09:49.228730       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:09:49.232719       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:09:49.636150       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:09:50.398980       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:09:50.410454       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 10:09:50.422894       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 10:09:55.365691       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:09:55.541476       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:09:55.547757       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:09:55.739216       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [a2749d18f881b5e92bc48e60abe4ffbee39700a0e7bb488c9684767788ec399d] <==
	I1123 10:09:54.635223       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 10:09:54.635246       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 10:09:54.635270       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 10:09:54.635349       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 10:09:54.635463       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 10:09:54.636693       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 10:09:54.636764       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 10:09:54.639017       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 10:09:54.639042       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 10:09:54.639047       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 10:09:54.641298       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:09:54.642461       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 10:09:54.644643       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 10:09:54.646870       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 10:09:54.649030       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:09:54.649071       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:09:54.649097       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 10:09:54.650334       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 10:09:54.653560       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 10:09:54.653591       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 10:09:54.656255       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:09:54.660047       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 10:09:54.676317       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:09:54.749752       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-528307" podCIDRs=["10.244.0.0/24"]
	I1123 10:10:09.594204       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ac99950a7e098e323bfee248673e4c31ba37425f1790766ec3dc49bec892737e] <==
	I1123 10:09:56.179262       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:09:56.235685       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:09:56.336252       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:09:56.336296       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 10:09:56.336378       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:09:56.358196       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:09:56.358268       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:09:56.364630       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:09:56.365116       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:09:56.365146       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:09:56.366523       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:09:56.366553       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:09:56.366632       1 config.go:200] "Starting service config controller"
	I1123 10:09:56.366642       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:09:56.366663       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:09:56.366671       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:09:56.366685       1 config.go:309] "Starting node config controller"
	I1123 10:09:56.366690       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:09:56.466907       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:09:56.466940       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:09:56.466968       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:09:56.466949       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c36df78f28a7ca903cc5bf44bda92b9e4c12e3a38a41fea5d8f9e265a7a9fb0b] <==
	E1123 10:09:47.682240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 10:09:47.682548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 10:09:47.683537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 10:09:47.683730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 10:09:47.683812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 10:09:47.683841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 10:09:47.683865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 10:09:47.683945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 10:09:47.684370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 10:09:47.684967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 10:09:48.562370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 10:09:48.611266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 10:09:48.633574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 10:09:48.680881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 10:09:48.740295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 10:09:48.756449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 10:09:48.773659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 10:09:48.843066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 10:09:48.886811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 10:09:48.916178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 10:09:48.919223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 10:09:48.940402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 10:09:48.967740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 10:09:48.970297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1123 10:09:50.978696       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:10:12 pause-528307 kubelet[1315]: E1123 10:10:12.321577    1315 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:12 pause-528307 kubelet[1315]: E1123 10:10:12.321595    1315 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:12 pause-528307 kubelet[1315]: E1123 10:10:12.385413    1315 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 23 10:10:12 pause-528307 kubelet[1315]: E1123 10:10:12.385470    1315 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:12 pause-528307 kubelet[1315]: E1123 10:10:12.385487    1315 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:14 pause-528307 kubelet[1315]: W1123 10:10:14.321497    1315 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 23 10:10:14 pause-528307 kubelet[1315]: E1123 10:10:14.321599    1315 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Nov 23 10:10:14 pause-528307 kubelet[1315]: E1123 10:10:14.321662    1315 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:14 pause-528307 kubelet[1315]: E1123 10:10:14.321678    1315 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:14 pause-528307 kubelet[1315]: E1123 10:10:14.321690    1315 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:14 pause-528307 kubelet[1315]: E1123 10:10:14.390299    1315 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 23 10:10:14 pause-528307 kubelet[1315]: E1123 10:10:14.390355    1315 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:14 pause-528307 kubelet[1315]: E1123 10:10:14.390367    1315 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:14 pause-528307 kubelet[1315]: W1123 10:10:14.422649    1315 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 23 10:10:14 pause-528307 kubelet[1315]: W1123 10:10:14.598622    1315 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 23 10:10:14 pause-528307 kubelet[1315]: W1123 10:10:14.870660    1315 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 23 10:10:15 pause-528307 kubelet[1315]: E1123 10:10:15.330906    1315 log.go:32] "Status from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:15 pause-528307 kubelet[1315]: E1123 10:10:15.331059    1315 kubelet.go:2996] "Container runtime sanity check failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:15 pause-528307 kubelet[1315]: E1123 10:10:15.390587    1315 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 23 10:10:15 pause-528307 kubelet[1315]: E1123 10:10:15.390649    1315 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:15 pause-528307 kubelet[1315]: E1123 10:10:15.390665    1315 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:19 pause-528307 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:10:19 pause-528307 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:10:19 pause-528307 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 10:10:19 pause-528307 systemd[1]: kubelet.service: Consumed 1.292s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-528307 -n pause-528307
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-528307 -n pause-528307: exit status 2 (343.190828ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-528307 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-528307
helpers_test.go:243: (dbg) docker inspect pause-528307:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ddcf7f443532024c4e4190a773d46febb6a13d2534b1a8dbafbf58e4e1307b80",
	        "Created": "2025-11-23T10:09:31.913438807Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 240758,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:09:32.239333335Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/ddcf7f443532024c4e4190a773d46febb6a13d2534b1a8dbafbf58e4e1307b80/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ddcf7f443532024c4e4190a773d46febb6a13d2534b1a8dbafbf58e4e1307b80/hostname",
	        "HostsPath": "/var/lib/docker/containers/ddcf7f443532024c4e4190a773d46febb6a13d2534b1a8dbafbf58e4e1307b80/hosts",
	        "LogPath": "/var/lib/docker/containers/ddcf7f443532024c4e4190a773d46febb6a13d2534b1a8dbafbf58e4e1307b80/ddcf7f443532024c4e4190a773d46febb6a13d2534b1a8dbafbf58e4e1307b80-json.log",
	        "Name": "/pause-528307",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-528307:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-528307",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ddcf7f443532024c4e4190a773d46febb6a13d2534b1a8dbafbf58e4e1307b80",
	                "LowerDir": "/var/lib/docker/overlay2/115720dd1f64b8a38e4e46f1b263c3b5bd7272d85e6b40b1b190e3cb1c4d63c5-init/diff:/var/lib/docker/overlay2/fa24abb4c55f78a010c7e2a32f724b8d5e912441e40bb77877899b0e5f3a9c8d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/115720dd1f64b8a38e4e46f1b263c3b5bd7272d85e6b40b1b190e3cb1c4d63c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/115720dd1f64b8a38e4e46f1b263c3b5bd7272d85e6b40b1b190e3cb1c4d63c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/115720dd1f64b8a38e4e46f1b263c3b5bd7272d85e6b40b1b190e3cb1c4d63c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-528307",
	                "Source": "/var/lib/docker/volumes/pause-528307/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-528307",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-528307",
	                "name.minikube.sigs.k8s.io": "pause-528307",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "4c68877e69de24fa1914d851de1975254adad1b4a51799b1f7dab2e565bab7ca",
	            "SandboxKey": "/var/run/docker/netns/4c68877e69de",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32975"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-528307": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cce7abf031edbafbd41cc78adbddea4b355a90181d22b37bccc90851bb53148d",
	                    "EndpointID": "2b6b688af3d5bf2426b01336d1fe77d810056f454466759f05b4f19c565187c4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "52:eb:ef:5a:47:0c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-528307",
	                        "ddcf7f443532"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-528307 -n pause-528307
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-528307 -n pause-528307: exit status 2 (334.942549ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-528307 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p test-preload-954233                                                                                                                   │ test-preload-954233         │ jenkins │ v1.37.0 │ 23 Nov 25 10:07 UTC │ 23 Nov 25 10:07 UTC │
	│ start   │ -p scheduled-stop-474690 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:07 UTC │ 23 Nov 25 10:07 UTC │
	│ stop    │ -p scheduled-stop-474690 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:07 UTC │                     │
	│ stop    │ -p scheduled-stop-474690 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:07 UTC │                     │
	│ stop    │ -p scheduled-stop-474690 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:07 UTC │                     │
	│ stop    │ -p scheduled-stop-474690 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:07 UTC │                     │
	│ stop    │ -p scheduled-stop-474690 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:07 UTC │                     │
	│ stop    │ -p scheduled-stop-474690 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:07 UTC │                     │
	│ stop    │ -p scheduled-stop-474690 --cancel-scheduled                                                                                              │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:07 UTC │ 23 Nov 25 10:07 UTC │
	│ stop    │ -p scheduled-stop-474690 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ stop    │ -p scheduled-stop-474690 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ stop    │ -p scheduled-stop-474690 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ delete  │ -p scheduled-stop-474690                                                                                                                 │ scheduled-stop-474690       │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p insufficient-storage-001514 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-001514 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │                     │
	│ delete  │ -p insufficient-storage-001514                                                                                                           │ insufficient-storage-001514 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p kubernetes-upgrade-069634 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-069634   │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p offline-crio-065092 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-065092         │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p pause-528307 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-528307                │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p missing-upgrade-417054 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-417054      │ jenkins │ v1.32.0 │ 23 Nov 25 10:09 UTC │                     │
	│ stop    │ -p kubernetes-upgrade-069634                                                                                                             │ kubernetes-upgrade-069634   │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p kubernetes-upgrade-069634 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-069634   │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │                     │
	│ start   │ -p pause-528307 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-528307                │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ delete  │ -p offline-crio-065092                                                                                                                   │ offline-crio-065092         │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p force-systemd-env-465707 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                               │ force-systemd-env-465707    │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	│ pause   │ -p pause-528307 --alsologtostderr -v=5                                                                                                   │ pause-528307                │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:10:15
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:10:15.861287  250596 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:10:15.861601  250596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:10:15.861614  250596 out.go:374] Setting ErrFile to fd 2...
	I1123 10:10:15.861622  250596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:10:15.861898  250596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:10:15.862449  250596 out.go:368] Setting JSON to false
	I1123 10:10:15.863627  250596 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10357,"bootTime":1763882259,"procs":273,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:10:15.863713  250596 start.go:143] virtualization: kvm guest
	I1123 10:10:15.866689  250596 out.go:179] * [force-systemd-env-465707] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:10:15.868219  250596 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:10:15.868212  250596 notify.go:221] Checking for updates...
	I1123 10:10:15.869711  250596 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:10:15.870981  250596 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:10:15.872194  250596 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 10:10:15.873410  250596 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:10:15.874557  250596 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1123 10:10:15.876566  250596 config.go:182] Loaded profile config "kubernetes-upgrade-069634": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:10:15.876769  250596 config.go:182] Loaded profile config "missing-upgrade-417054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1123 10:10:15.876944  250596 config.go:182] Loaded profile config "pause-528307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:10:15.877067  250596 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:10:15.906025  250596 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 10:10:15.906244  250596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:10:15.980798  250596 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:60 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-23 10:10:15.968550175 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:10:15.980957  250596 docker.go:319] overlay module found
	I1123 10:10:15.983947  250596 out.go:179] * Using the docker driver based on user configuration
	I1123 10:10:15.985249  250596 start.go:309] selected driver: docker
	I1123 10:10:15.985269  250596 start.go:927] validating driver "docker" against <nil>
	I1123 10:10:15.985285  250596 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:10:15.986061  250596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:10:16.059810  250596 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-23 10:10:16.047427531 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:10:16.060024  250596 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 10:10:16.060317  250596 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 10:10:16.063135  250596 out.go:179] * Using Docker driver with root privileges
	I1123 10:10:16.064590  250596 cni.go:84] Creating CNI manager for ""
	I1123 10:10:16.064672  250596 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:10:16.064688  250596 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:10:16.064795  250596 start.go:353] cluster config:
	{Name:force-systemd-env-465707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-465707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:10:16.066232  250596 out.go:179] * Starting "force-systemd-env-465707" primary control-plane node in "force-systemd-env-465707" cluster
	I1123 10:10:16.067567  250596 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:10:16.068863  250596 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:10:16.070049  250596 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:10:16.070105  250596 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 10:10:16.070119  250596 cache.go:65] Caching tarball of preloaded images
	I1123 10:10:16.070144  250596 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:10:16.070234  250596 preload.go:238] Found /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 10:10:16.070251  250596 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:10:16.070380  250596 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/force-systemd-env-465707/config.json ...
	I1123 10:10:16.070417  250596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/force-systemd-env-465707/config.json: {Name:mk5130267bb7f0d446b287ca283f1c4507614563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:10:16.096213  250596 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:10:16.096249  250596 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:10:16.096269  250596 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:10:16.096313  250596 start.go:360] acquireMachinesLock for force-systemd-env-465707: {Name:mk4315d27d905a91bdd8d22a15cc79647e055ded Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:10:16.096431  250596 start.go:364] duration metric: took 92.327µs to acquireMachinesLock for "force-systemd-env-465707"
	I1123 10:10:16.096463  250596 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-465707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-465707 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:10:16.096588  250596 start.go:125] createHost starting for "" (driver="docker")
	I1123 10:10:15.523142  249292 cli_runner.go:164] Run: docker network inspect pause-528307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:10:15.545366  249292 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:10:15.551248  249292 kubeadm.go:884] updating cluster {Name:pause-528307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-528307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:10:15.551430  249292 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:10:15.551497  249292 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:10:15.593830  249292 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:10:15.593858  249292 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:10:15.593913  249292 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:10:15.627253  249292 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:10:15.627282  249292 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:10:15.627292  249292 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 10:10:15.627430  249292 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-528307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-528307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:10:15.627520  249292 ssh_runner.go:195] Run: crio config
	I1123 10:10:15.687059  249292 cni.go:84] Creating CNI manager for ""
	I1123 10:10:15.687096  249292 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:10:15.687117  249292 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:10:15.687146  249292 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-528307 NodeName:pause-528307 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:10:15.687323  249292 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-528307"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:10:15.687403  249292 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:10:15.697523  249292 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:10:15.697596  249292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:10:15.707548  249292 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1123 10:10:15.724429  249292 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:10:15.742570  249292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1123 10:10:15.760019  249292 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:10:15.765266  249292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:10:15.917613  249292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:10:15.936689  249292 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307 for IP: 192.168.76.2
	I1123 10:10:15.936714  249292 certs.go:195] generating shared ca certs ...
	I1123 10:10:15.936746  249292 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:10:15.936913  249292 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 10:10:15.936987  249292 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 10:10:15.936999  249292 certs.go:257] generating profile certs ...
	I1123 10:10:15.937129  249292 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/client.key
	I1123 10:10:15.937208  249292 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/apiserver.key.959c932b
	I1123 10:10:15.937263  249292 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/proxy-client.key
	I1123 10:10:15.937435  249292 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem (1338 bytes)
	W1123 10:10:15.937489  249292 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870_empty.pem, impossibly tiny 0 bytes
	I1123 10:10:15.937501  249292 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 10:10:15.937538  249292 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:10:15.937571  249292 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:10:15.937600  249292 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 10:10:15.937657  249292 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:10:15.938665  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:10:15.964982  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:10:15.989451  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:10:16.013076  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 10:10:16.037142  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1123 10:10:16.061700  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 10:10:16.084425  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:10:16.108076  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:10:16.131237  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /usr/share/ca-certificates/678702.pem (1708 bytes)
	I1123 10:10:16.154483  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:10:16.178588  249292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem --> /usr/share/ca-certificates/67870.pem (1338 bytes)
	I1123 10:10:16.203423  249292 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:10:16.220821  249292 ssh_runner.go:195] Run: openssl version
	I1123 10:10:16.229386  249292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67870.pem && ln -fs /usr/share/ca-certificates/67870.pem /etc/ssl/certs/67870.pem"
	I1123 10:10:16.242459  249292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67870.pem
	I1123 10:10:16.247681  249292 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:28 /usr/share/ca-certificates/67870.pem
	I1123 10:10:16.247750  249292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67870.pem
	I1123 10:10:16.299356  249292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67870.pem /etc/ssl/certs/51391683.0"
	I1123 10:10:16.311163  249292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678702.pem && ln -fs /usr/share/ca-certificates/678702.pem /etc/ssl/certs/678702.pem"
	I1123 10:10:16.324675  249292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678702.pem
	I1123 10:10:16.330939  249292 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:28 /usr/share/ca-certificates/678702.pem
	I1123 10:10:16.331030  249292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678702.pem
	I1123 10:10:16.382395  249292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678702.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:10:16.393675  249292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:10:16.406306  249292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:10:16.411657  249292 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:10:16.411731  249292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:10:16.465750  249292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:10:16.477038  249292 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:10:16.482508  249292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:10:16.536830  249292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:10:16.588493  249292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:10:16.642133  249292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:10:16.697960  249292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:10:16.743749  249292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1123 10:10:16.797685  249292 kubeadm.go:401] StartCluster: {Name:pause-528307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-528307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:10:16.797857  249292 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:10:16.797953  249292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:10:16.837735  249292 cri.go:89] found id: "1ad0225f3a144eac06e8e40d1cd14563020304d1992e30a1c3e13dfba44ea7f8"
	I1123 10:10:16.837767  249292 cri.go:89] found id: "d28ec85d418ca8e19a54b4f89de49657fde77bd215e94a2df5dd6926463e3be2"
	I1123 10:10:16.837773  249292 cri.go:89] found id: "ac99950a7e098e323bfee248673e4c31ba37425f1790766ec3dc49bec892737e"
	I1123 10:10:16.837777  249292 cri.go:89] found id: "a2749d18f881b5e92bc48e60abe4ffbee39700a0e7bb488c9684767788ec399d"
	I1123 10:10:16.837782  249292 cri.go:89] found id: "244846b68f3f5bc25776d5a1acdbcfdcf54f1966e1908fe00aef4c21b33f79a8"
	I1123 10:10:16.837786  249292 cri.go:89] found id: "c36df78f28a7ca903cc5bf44bda92b9e4c12e3a38a41fea5d8f9e265a7a9fb0b"
	I1123 10:10:16.837791  249292 cri.go:89] found id: "d5bd829f0253e0ab67569dd8947f9e0594c58c0202a535b8f4a14e048463283c"
	I1123 10:10:16.837796  249292 cri.go:89] found id: ""
	I1123 10:10:16.837853  249292 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:10:16.855001  249292 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:10:16Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:10:16.855195  249292 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:10:16.871084  249292 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:10:16.871125  249292 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:10:16.871185  249292 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:10:16.881736  249292 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:10:16.882640  249292 kubeconfig.go:125] found "pause-528307" server: "https://192.168.76.2:8443"
	I1123 10:10:16.883918  249292 kapi.go:59] client config for pause-528307: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/client.key", CAFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 10:10:16.884636  249292 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1123 10:10:16.884658  249292 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1123 10:10:16.884666  249292 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1123 10:10:16.884672  249292 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1123 10:10:16.884678  249292 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1123 10:10:16.885125  249292 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:10:16.896803  249292 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 10:10:16.896845  249292 kubeadm.go:602] duration metric: took 25.714195ms to restartPrimaryControlPlane
	I1123 10:10:16.896859  249292 kubeadm.go:403] duration metric: took 99.185241ms to StartCluster
	I1123 10:10:16.896881  249292 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:10:16.896976  249292 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:10:16.897862  249292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:10:16.898144  249292 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:10:16.898265  249292 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:10:16.898480  249292 config.go:182] Loaded profile config "pause-528307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:10:16.900373  249292 out.go:179] * Verifying Kubernetes components...
	I1123 10:10:16.900450  249292 out.go:179] * Enabled addons: 
	I1123 10:10:16.901940  249292 addons.go:530] duration metric: took 3.684025ms for enable addons: enabled=[]
	I1123 10:10:16.901979  249292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:10:17.059661  249292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:10:17.077139  249292 node_ready.go:35] waiting up to 6m0s for node "pause-528307" to be "Ready" ...
	I1123 10:10:17.086191  249292 node_ready.go:49] node "pause-528307" is "Ready"
	I1123 10:10:17.086219  249292 node_ready.go:38] duration metric: took 9.031479ms for node "pause-528307" to be "Ready" ...
	I1123 10:10:17.086238  249292 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:10:17.086304  249292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:10:17.100057  249292 api_server.go:72] duration metric: took 201.865793ms to wait for apiserver process to appear ...
	I1123 10:10:17.100107  249292 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:10:17.100134  249292 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:10:17.105121  249292 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:10:17.106349  249292 api_server.go:141] control plane version: v1.34.1
	I1123 10:10:17.106381  249292 api_server.go:131] duration metric: took 6.264006ms to wait for apiserver health ...
	I1123 10:10:17.106392  249292 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:10:17.109884  249292 system_pods.go:59] 7 kube-system pods found
	I1123 10:10:17.109919  249292 system_pods.go:61] "coredns-66bc5c9577-nnglq" [9f32f9bd-be71-440e-a1a2-7c971ea27ff4] Running
	I1123 10:10:17.109935  249292 system_pods.go:61] "etcd-pause-528307" [d9f542bc-1732-42da-8fb4-cca3b910dbd8] Running
	I1123 10:10:17.109940  249292 system_pods.go:61] "kindnet-mh9dq" [45e79ade-c9ae-4302-b726-b23f6a52c9ff] Running
	I1123 10:10:17.109946  249292 system_pods.go:61] "kube-apiserver-pause-528307" [662044ca-d2ca-4362-9784-c882f2824c63] Running
	I1123 10:10:17.109952  249292 system_pods.go:61] "kube-controller-manager-pause-528307" [29f640fa-d5fc-41c6-9073-52f21609dfa8] Running
	I1123 10:10:17.109960  249292 system_pods.go:61] "kube-proxy-jgn4v" [6a8ef8ed-fd78-4dab-b4f2-d48f8e87169e] Running
	I1123 10:10:17.109963  249292 system_pods.go:61] "kube-scheduler-pause-528307" [8f258b36-44c3-46cd-bca4-2e3f836cdb4d] Running
	I1123 10:10:17.109970  249292 system_pods.go:74] duration metric: took 3.571281ms to wait for pod list to return data ...
	I1123 10:10:17.109984  249292 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:10:17.112040  249292 default_sa.go:45] found service account: "default"
	I1123 10:10:17.112063  249292 default_sa.go:55] duration metric: took 2.070706ms for default service account to be created ...
	I1123 10:10:17.112073  249292 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:10:17.116650  249292 system_pods.go:86] 7 kube-system pods found
	I1123 10:10:17.116679  249292 system_pods.go:89] "coredns-66bc5c9577-nnglq" [9f32f9bd-be71-440e-a1a2-7c971ea27ff4] Running
	I1123 10:10:17.116687  249292 system_pods.go:89] "etcd-pause-528307" [d9f542bc-1732-42da-8fb4-cca3b910dbd8] Running
	I1123 10:10:17.116692  249292 system_pods.go:89] "kindnet-mh9dq" [45e79ade-c9ae-4302-b726-b23f6a52c9ff] Running
	I1123 10:10:17.116697  249292 system_pods.go:89] "kube-apiserver-pause-528307" [662044ca-d2ca-4362-9784-c882f2824c63] Running
	I1123 10:10:17.116703  249292 system_pods.go:89] "kube-controller-manager-pause-528307" [29f640fa-d5fc-41c6-9073-52f21609dfa8] Running
	I1123 10:10:17.116707  249292 system_pods.go:89] "kube-proxy-jgn4v" [6a8ef8ed-fd78-4dab-b4f2-d48f8e87169e] Running
	I1123 10:10:17.116713  249292 system_pods.go:89] "kube-scheduler-pause-528307" [8f258b36-44c3-46cd-bca4-2e3f836cdb4d] Running
	I1123 10:10:17.116722  249292 system_pods.go:126] duration metric: took 4.641327ms to wait for k8s-apps to be running ...
	I1123 10:10:17.116739  249292 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:10:17.116790  249292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:10:17.134213  249292 system_svc.go:56] duration metric: took 17.455018ms WaitForService to wait for kubelet
	I1123 10:10:17.134251  249292 kubeadm.go:587] duration metric: took 236.066754ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:10:17.134283  249292 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:10:17.137677  249292 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:10:17.137712  249292 node_conditions.go:123] node cpu capacity is 8
	I1123 10:10:17.137729  249292 node_conditions.go:105] duration metric: took 3.439254ms to run NodePressure ...
	I1123 10:10:17.137745  249292 start.go:242] waiting for startup goroutines ...
	I1123 10:10:17.137754  249292 start.go:247] waiting for cluster config update ...
	I1123 10:10:17.137765  249292 start.go:256] writing updated cluster config ...
	I1123 10:10:17.138143  249292 ssh_runner.go:195] Run: rm -f paused
	I1123 10:10:17.143124  249292 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:10:17.143684  249292 kapi.go:59] client config for pause-528307: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/profiles/pause-528307/client.key", CAFile:"/home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 10:10:17.146812  249292 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nnglq" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:17.152165  249292 pod_ready.go:94] pod "coredns-66bc5c9577-nnglq" is "Ready"
	I1123 10:10:17.152195  249292 pod_ready.go:86] duration metric: took 5.358234ms for pod "coredns-66bc5c9577-nnglq" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:17.154755  249292 pod_ready.go:83] waiting for pod "etcd-pause-528307" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:17.159547  249292 pod_ready.go:94] pod "etcd-pause-528307" is "Ready"
	I1123 10:10:17.159576  249292 pod_ready.go:86] duration metric: took 4.794576ms for pod "etcd-pause-528307" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:17.161838  249292 pod_ready.go:83] waiting for pod "kube-apiserver-pause-528307" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:17.165952  249292 pod_ready.go:94] pod "kube-apiserver-pause-528307" is "Ready"
	I1123 10:10:17.166015  249292 pod_ready.go:86] duration metric: took 4.153656ms for pod "kube-apiserver-pause-528307" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:17.167966  249292 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-528307" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:17.547997  249292 pod_ready.go:94] pod "kube-controller-manager-pause-528307" is "Ready"
	I1123 10:10:17.548034  249292 pod_ready.go:86] duration metric: took 380.046729ms for pod "kube-controller-manager-pause-528307" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:17.747583  249292 pod_ready.go:83] waiting for pod "kube-proxy-jgn4v" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:18.147663  249292 pod_ready.go:94] pod "kube-proxy-jgn4v" is "Ready"
	I1123 10:10:18.147691  249292 pod_ready.go:86] duration metric: took 400.076957ms for pod "kube-proxy-jgn4v" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:18.346944  249292 pod_ready.go:83] waiting for pod "kube-scheduler-pause-528307" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:18.747707  249292 pod_ready.go:94] pod "kube-scheduler-pause-528307" is "Ready"
	I1123 10:10:18.747742  249292 pod_ready.go:86] duration metric: took 400.765022ms for pod "kube-scheduler-pause-528307" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:18.747762  249292 pod_ready.go:40] duration metric: took 1.604591375s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:10:18.806282  249292 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:10:18.812260  249292 out.go:179] * Done! kubectl is now configured to use "pause-528307" cluster and "default" namespace by default
	I1123 10:10:16.287939  246976 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1123 10:10:16.288016  246976 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 10:10:16.100647  250596 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 10:10:16.100921  250596 start.go:159] libmachine.API.Create for "force-systemd-env-465707" (driver="docker")
	I1123 10:10:16.100972  250596 client.go:173] LocalClient.Create starting
	I1123 10:10:16.101103  250596 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem
	I1123 10:10:16.101152  250596 main.go:143] libmachine: Decoding PEM data...
	I1123 10:10:16.101171  250596 main.go:143] libmachine: Parsing certificate...
	I1123 10:10:16.101278  250596 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem
	I1123 10:10:16.101316  250596 main.go:143] libmachine: Decoding PEM data...
	I1123 10:10:16.101335  250596 main.go:143] libmachine: Parsing certificate...
	I1123 10:10:16.101774  250596 cli_runner.go:164] Run: docker network inspect force-systemd-env-465707 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 10:10:16.123765  250596 cli_runner.go:211] docker network inspect force-systemd-env-465707 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 10:10:16.123865  250596 network_create.go:284] running [docker network inspect force-systemd-env-465707] to gather additional debugging logs...
	I1123 10:10:16.123898  250596 cli_runner.go:164] Run: docker network inspect force-systemd-env-465707
	W1123 10:10:16.144704  250596 cli_runner.go:211] docker network inspect force-systemd-env-465707 returned with exit code 1
	I1123 10:10:16.144744  250596 network_create.go:287] error running [docker network inspect force-systemd-env-465707]: docker network inspect force-systemd-env-465707: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-465707 not found
	I1123 10:10:16.144762  250596 network_create.go:289] output of [docker network inspect force-systemd-env-465707]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-465707 not found
	
	** /stderr **
	I1123 10:10:16.144894  250596 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:10:16.167573  250596 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9af1e2c0d039 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:86:44:24:e5:b5} reservation:<nil>}
	I1123 10:10:16.168226  250596 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-461f783b5692 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:1f:63:e6:a3:d5} reservation:<nil>}
	I1123 10:10:16.168837  250596 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-00c53b2b0c8c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:de:97:e2:97:bc:92} reservation:<nil>}
	I1123 10:10:16.169524  250596 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-cce7abf031ed IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:16:5f:a9:f8:75:ce} reservation:<nil>}
	I1123 10:10:16.170483  250596 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e35ec0}
	I1123 10:10:16.170512  250596 network_create.go:124] attempt to create docker network force-systemd-env-465707 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 10:10:16.170580  250596 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-465707 force-systemd-env-465707
	I1123 10:10:16.231514  250596 network_create.go:108] docker network force-systemd-env-465707 192.168.85.0/24 created
	I1123 10:10:16.231551  250596 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-465707" container
	I1123 10:10:16.231636  250596 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 10:10:16.253263  250596 cli_runner.go:164] Run: docker volume create force-systemd-env-465707 --label name.minikube.sigs.k8s.io=force-systemd-env-465707 --label created_by.minikube.sigs.k8s.io=true
	I1123 10:10:16.276289  250596 oci.go:103] Successfully created a docker volume force-systemd-env-465707
	I1123 10:10:16.276392  250596 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-465707-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-465707 --entrypoint /usr/bin/test -v force-systemd-env-465707:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 10:10:17.051622  250596 oci.go:107] Successfully prepared a docker volume force-systemd-env-465707
	I1123 10:10:17.051715  250596 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:10:17.051733  250596 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 10:10:17.051816  250596 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-465707:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 10:10:23.175680  240307 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-417054-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-417054 --entrypoint /usr/bin/test -v missing-upgrade-417054:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (20.374665909s)
	I1123 10:10:23.175701  240307 oci.go:107] Successfully prepared a docker volume missing-upgrade-417054
	I1123 10:10:23.175723  240307 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1123 10:10:23.175745  240307 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 10:10:23.175799  240307 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-417054:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
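
Note on the network-creation step above: the client log shows minikube skipping 192.168.49.0/24, .58, .67 and .76 because they are already bound to bridge interfaces, then creating the force-systemd-env-465707 network on the first free candidate (192.168.85.0/24) before extracting the preload tarball into a named volume. The following is a minimal sketch of that subnet-probing behaviour, not minikube's actual implementation (minikube inspects host routes rather than retrying creates); the file name and the step-by-9 walk are taken from the progression visible in the log, and the labels/MTU options from the real command are omitted.

// freenet.go - hypothetical sketch: try candidate 192.168.x.0/24 subnets and
// create a bridge network on the first one Docker accepts.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	const name = "force-systemd-env-465707" // network name from the log above
	for third := 49; third <= 112; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet,
			"--gateway="+gateway,
			name)
		if out, err := cmd.CombinedOutput(); err != nil {
			// Docker rejects overlapping subnets, so treat any failure as
			// "taken" and move to the next candidate, as the log does.
			log.Printf("subnet %s unavailable: %v (%s)", subnet, err, out)
			continue
		}
		fmt.Printf("created network %s on %s (gateway %s)\n", name, subnet, gateway)
		return
	}
	log.Fatal("no free subnet found")
}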
	
	
	==> CRI-O <==
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.217774265Z" level=info msg="RDT not available in the host system"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.217787743Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.218791549Z" level=info msg="Conmon does support the --sync option"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.218818977Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.2188387Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.219653094Z" level=info msg="Conmon does support the --sync option"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.219674824Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.224050814Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.224079449Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.224886769Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.225586397Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.225651697Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.393481266Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-nnglq Namespace:kube-system ID:27190eb86e4b14a93c6a425ae9a569ecd6400df16cc523bd12073f6549002674 UID:9f32f9bd-be71-440e-a1a2-7c971ea27ff4 NetNS:/var/run/netns/f4e362c5-100d-45d4-ace1-d13ca8f74002 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009840b8}] Aliases:map[]}"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.393662395Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-nnglq for CNI network kindnet (type=ptp)"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.394142728Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.394171141Z" level=info msg="Starting seccomp notifier watcher"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.394243647Z" level=info msg="Create NRI interface"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.394355852Z" level=info msg="built-in NRI default validator is disabled"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.39437114Z" level=info msg="runtime interface created"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.394384783Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.394392672Z" level=info msg="runtime interface starting up..."
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.394400917Z" level=info msg="starting plugins..."
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.394416154Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 23 10:10:15 pause-528307 crio[2162]: time="2025-11-23T10:10:15.394748091Z" level=info msg="No systemd watchdog enabled"
	Nov 23 10:10:15 pause-528307 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	1ad0225f3a144       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   17 seconds ago      Running             coredns                   0                   27190eb86e4b1       coredns-66bc5c9577-nnglq               kube-system
	d28ec85d418ca       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   29 seconds ago      Running             kindnet-cni               0                   6ebd779429110       kindnet-mh9dq                          kube-system
	ac99950a7e098       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   29 seconds ago      Running             kube-proxy                0                   178ce4605b1e9       kube-proxy-jgn4v                       kube-system
	a2749d18f881b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   40 seconds ago      Running             kube-controller-manager   0                   683273d014237       kube-controller-manager-pause-528307   kube-system
	244846b68f3f5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   40 seconds ago      Running             kube-apiserver            0                   7d95a91920606       kube-apiserver-pause-528307            kube-system
	c36df78f28a7c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   40 seconds ago      Running             kube-scheduler            0                   ad0d693a3f027       kube-scheduler-pause-528307            kube-system
	d5bd829f0253e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   40 seconds ago      Running             etcd                      0                   1092e0f415f91       etcd-pause-528307                      kube-system
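
The container-status table above can be reproduced against the same node by asking the CRI runtime directly. Below is a hypothetical helper (not part of the test suite) that shells out to crictl; the JSON field names ("containers", "metadata.name", "state") follow crictl's protobuf output and may differ between crictl versions.

// crictlps.go - hypothetical sketch: list all containers via crictl and print
// a trimmed version of the table shown above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type crictlPS struct {
	Containers []struct {
		ID       string `json:"id"`
		State    string `json:"state"`
		Metadata struct {
			Name    string `json:"name"`
			Attempt int    `json:"attempt"`
		} `json:"metadata"`
	} `json:"containers"`
}

func main() {
	out, err := exec.Command("crictl", "ps", "-a", "-o", "json").Output()
	if err != nil {
		log.Fatalf("crictl ps failed: %v", err)
	}
	var ps crictlPS
	if err := json.Unmarshal(out, &ps); err != nil {
		log.Fatalf("unexpected crictl output: %v", err)
	}
	for _, c := range ps.Containers {
		id := c.ID
		if len(id) > 13 {
			id = id[:13] // match the truncated IDs in the table above
		}
		fmt.Printf("%-14s %-25s attempt=%d %s\n", id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}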
	
	
	==> coredns [1ad0225f3a144eac06e8e40d1cd14563020304d1992e30a1c3e13dfba44ea7f8] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33569 - 21917 "HINFO IN 3934179666926209465.5054150719438489396. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.041515213s
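
The HINFO/NXDOMAIN line above is CoreDNS's own startup self-check. A quick way to confirm the same resolver is reachable from a pod is to query the kube-dns service IP directly; the sketch below uses 10.96.0.10, the clusterIP allocated for kube-system/kube-dns in the kube-apiserver log further down, and the standard kubernetes.default.svc.cluster.local name. It is an illustration, not part of the test.

// dnscheck.go - hypothetical sketch: resolve a cluster name against CoreDNS.
package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			// Always talk to the in-cluster CoreDNS service address.
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	if err != nil {
		log.Fatalf("cluster DNS lookup failed: %v", err)
	}
	fmt.Println("kubernetes.default resolves to", addrs)
}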
	
	
	==> describe nodes <==
	Name:               pause-528307
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-528307
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=pause-528307
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_09_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:09:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-528307
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:10:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:10:10 +0000   Sun, 23 Nov 2025 10:09:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:10:10 +0000   Sun, 23 Nov 2025 10:09:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:10:10 +0000   Sun, 23 Nov 2025 10:09:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:10:10 +0000   Sun, 23 Nov 2025 10:10:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-528307
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                878524ca-ae1f-4dff-a09b-2d2512c29616
	  Boot ID:                    37682299-5e60-467e-85b2-43c912a4056e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-nnglq                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30s
	  kube-system                 etcd-pause-528307                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         37s
	  kube-system                 kindnet-mh9dq                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-pause-528307             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-pause-528307    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-jgn4v                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-pause-528307             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28s   kube-proxy       
	  Normal  Starting                 35s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s   kubelet          Node pause-528307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s   kubelet          Node pause-528307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s   kubelet          Node pause-528307 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s   node-controller  Node pause-528307 event: Registered Node pause-528307 in Controller
	  Normal  NodeReady                19s   kubelet          Node pause-528307 status is now: NodeReady
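
The percentages in the "Allocated resources" table above come from summing the per-pod requests and dividing by the node's allocatable capacity; the small sketch below reproduces the "850m (10%)" and "220Mi (0%)" figures from the values listed in this node description.

// alloc.go - reproduce the allocated-resource percentages shown above.
package main

import "fmt"

func main() {
	// Values taken from the node description above.
	allocatableCPUMilli := int64(8 * 1000) // cpu: 8
	allocatableMemKi := int64(32863348)    // memory: 32863348Ki
	requestedCPUMilli := int64(100 + 100 + 100 + 250 + 200 + 0 + 100) // 850m total
	requestedMemMi := int64(70 + 100 + 50)                            // 220Mi total

	cpuPct := float64(requestedCPUMilli) / float64(allocatableCPUMilli) * 100
	memPct := float64(requestedMemMi*1024) / float64(allocatableMemKi) * 100

	fmt.Printf("cpu    %dm (%d%%)\n", requestedCPUMilli, int(cpuPct)) // 850m (10%)
	fmt.Printf("memory %dMi (%d%%)\n", requestedMemMi, int(memPct))   // 220Mi (0%)
}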
	
	
	==> dmesg <==
	[Nov23 09:25] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.037608] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.023905] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.023966] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000012] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +2.048049] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +4.031511] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +8.255356] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[ +16.383752] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 09:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	
	
	==> etcd [d5bd829f0253e0ab67569dd8947f9e0594c58c0202a535b8f4a14e048463283c] <==
	{"level":"warn","ts":"2025-11-23T10:09:47.041962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:47.049689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:47.119349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:54.748179Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.512334ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" limit:1 ","response":"range_response_count:1 size:197"}
	{"level":"info","ts":"2025-11-23T10:09:54.748268Z","caller":"traceutil/trace.go:172","msg":"trace[301164747] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:1; response_revision:296; }","duration":"115.632574ms","start":"2025-11-23T10:09:54.632619Z","end":"2025-11-23T10:09:54.748251Z","steps":["trace[301164747] 'agreement among raft nodes before linearized reading'  (duration: 97.283944ms)","trace[301164747] 'range keys from in-memory index tree'  (duration: 18.127171ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T10:09:54.748281Z","caller":"traceutil/trace.go:172","msg":"trace[1292088339] transaction","detail":"{read_only:false; response_revision:297; number_of_response:1; }","duration":"153.902543ms","start":"2025-11-23T10:09:54.594353Z","end":"2025-11-23T10:09:54.748255Z","steps":["trace[1292088339] 'process raft request'  (duration: 135.595316ms)","trace[1292088339] 'compare'  (duration: 18.104584ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T10:09:54.748310Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.450961ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/pause-528307\" limit:1 ","response":"range_response_count:1 size:552"}
	{"level":"info","ts":"2025-11-23T10:09:54.748361Z","caller":"traceutil/trace.go:172","msg":"trace[1410084704] range","detail":"{range_begin:/registry/leases/kube-node-lease/pause-528307; range_end:; response_count:1; response_revision:297; }","duration":"113.507306ms","start":"2025-11-23T10:09:54.634842Z","end":"2025-11-23T10:09:54.748349Z","steps":["trace[1410084704] 'agreement among raft nodes before linearized reading'  (duration: 113.376917ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T10:10:04.522549Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.52702ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356837615740364 > lease_revoke:<id:59069ab030e323f8>","response":"size:28"}
	{"level":"warn","ts":"2025-11-23T10:10:07.547413Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"169.152775ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-nnglq\" limit:1 ","response":"range_response_count:1 size:5545"}
	{"level":"info","ts":"2025-11-23T10:10:07.547486Z","caller":"traceutil/trace.go:172","msg":"trace[1265247078] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-nnglq; range_end:; response_count:1; response_revision:391; }","duration":"169.23842ms","start":"2025-11-23T10:10:07.378230Z","end":"2025-11-23T10:10:07.547468Z","steps":["trace[1265247078] 'range keys from in-memory index tree'  (duration: 168.972864ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T10:10:07.547628Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"198.886698ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T10:10:07.547738Z","caller":"traceutil/trace.go:172","msg":"trace[427600288] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:391; }","duration":"199.004203ms","start":"2025-11-23T10:10:07.348721Z","end":"2025-11-23T10:10:07.547725Z","steps":["trace[427600288] 'range keys from in-memory index tree'  (duration: 198.78509ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:10:07.733506Z","caller":"traceutil/trace.go:172","msg":"trace[2034828500] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"177.638506ms","start":"2025-11-23T10:10:07.555841Z","end":"2025-11-23T10:10:07.733479Z","steps":["trace[2034828500] 'process raft request'  (duration: 177.546366ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:10:07.894589Z","caller":"traceutil/trace.go:172","msg":"trace[2051461231] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"152.080948ms","start":"2025-11-23T10:10:07.742486Z","end":"2025-11-23T10:10:07.894567Z","steps":["trace[2051461231] 'process raft request'  (duration: 152.042484ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:10:07.894679Z","caller":"traceutil/trace.go:172","msg":"trace[1040618974] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"156.727443ms","start":"2025-11-23T10:10:07.737931Z","end":"2025-11-23T10:10:07.894659Z","steps":["trace[1040618974] 'process raft request'  (duration: 156.54679ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:10:07.894705Z","caller":"traceutil/trace.go:172","msg":"trace[2037295264] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"156.816247ms","start":"2025-11-23T10:10:07.737867Z","end":"2025-11-23T10:10:07.894683Z","steps":["trace[2037295264] 'process raft request'  (duration: 59.367872ms)","trace[2037295264] 'compare'  (duration: 97.134973ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T10:10:08.035872Z","caller":"traceutil/trace.go:172","msg":"trace[1903948012] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"134.617317ms","start":"2025-11-23T10:10:07.901239Z","end":"2025-11-23T10:10:08.035856Z","steps":["trace[1903948012] 'process raft request'  (duration: 134.562726ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:10:08.035888Z","caller":"traceutil/trace.go:172","msg":"trace[1541417610] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"135.699327ms","start":"2025-11-23T10:10:07.900164Z","end":"2025-11-23T10:10:08.035864Z","steps":["trace[1541417610] 'process raft request'  (duration: 108.646019ms)","trace[1541417610] 'compare'  (duration: 26.904879ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T10:10:08.197200Z","caller":"traceutil/trace.go:172","msg":"trace[184438368] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"116.540425ms","start":"2025-11-23T10:10:08.080642Z","end":"2025-11-23T10:10:08.197182Z","steps":["trace[184438368] 'process raft request'  (duration: 116.485201ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:10:08.197282Z","caller":"traceutil/trace.go:172","msg":"trace[890774198] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"153.492432ms","start":"2025-11-23T10:10:08.043777Z","end":"2025-11-23T10:10:08.197269Z","steps":["trace[890774198] 'process raft request'  (duration: 92.049271ms)","trace[890774198] 'compare'  (duration: 61.140163ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T10:10:08.463012Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.127495ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-528307\" limit:1 ","response":"range_response_count:1 size:7283"}
	{"level":"info","ts":"2025-11-23T10:10:08.463071Z","caller":"traceutil/trace.go:172","msg":"trace[1553662466] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-528307; range_end:; response_count:1; response_revision:400; }","duration":"161.200039ms","start":"2025-11-23T10:10:08.301859Z","end":"2025-11-23T10:10:08.463059Z","steps":["trace[1553662466] 'range keys from in-memory index tree'  (duration: 160.974494ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T10:10:08.463024Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.730238ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T10:10:08.463128Z","caller":"traceutil/trace.go:172","msg":"trace[1365454372] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:400; }","duration":"114.824137ms","start":"2025-11-23T10:10:08.348286Z","end":"2025-11-23T10:10:08.463111Z","steps":["trace[1365454372] 'range keys from in-memory index tree'  (duration: 114.604074ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:10:25 up  2:52,  0 user,  load average: 5.29, 1.81, 1.20
	Linux pause-528307 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d28ec85d418ca8e19a54b4f89de49657fde77bd215e94a2df5dd6926463e3be2] <==
	I1123 10:09:56.413213       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:09:56.413478       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 10:09:56.413629       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:09:56.413649       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:09:56.413663       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:09:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:09:56.613252       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:09:56.741434       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:09:56.741463       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:09:56.741633       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:09:57.012215       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:09:57.012243       1 metrics.go:72] Registering metrics
	I1123 10:09:57.012285       1 controller.go:711] "Syncing nftables rules"
	I1123 10:10:06.615415       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:10:06.615475       1 main.go:301] handling current node
	I1123 10:10:16.613247       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:10:16.613290       1 main.go:301] handling current node
	
	
	==> kube-apiserver [244846b68f3f5bc25776d5a1acdbcfdcf54f1966e1908fe00aef4c21b33f79a8] <==
	I1123 10:09:47.714379       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:09:47.714386       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:09:47.715183       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 10:09:47.716041       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 10:09:47.716674       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:09:47.721406       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:09:47.724514       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 10:09:47.744939       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:09:48.618462       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 10:09:48.622398       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 10:09:48.622416       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:09:49.124351       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:09:49.168065       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:09:49.222064       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 10:09:49.227781       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 10:09:49.228730       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:09:49.232719       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:09:49.636150       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:09:50.398980       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:09:50.410454       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 10:09:50.422894       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 10:09:55.365691       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:09:55.541476       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:09:55.547757       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:09:55.739216       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [a2749d18f881b5e92bc48e60abe4ffbee39700a0e7bb488c9684767788ec399d] <==
	I1123 10:09:54.635223       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 10:09:54.635246       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 10:09:54.635270       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 10:09:54.635349       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 10:09:54.635463       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 10:09:54.636693       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 10:09:54.636764       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 10:09:54.639017       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 10:09:54.639042       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 10:09:54.639047       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 10:09:54.641298       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:09:54.642461       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 10:09:54.644643       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 10:09:54.646870       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 10:09:54.649030       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:09:54.649071       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:09:54.649097       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 10:09:54.650334       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 10:09:54.653560       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 10:09:54.653591       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 10:09:54.656255       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:09:54.660047       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 10:09:54.676317       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:09:54.749752       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-528307" podCIDRs=["10.244.0.0/24"]
	I1123 10:10:09.594204       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ac99950a7e098e323bfee248673e4c31ba37425f1790766ec3dc49bec892737e] <==
	I1123 10:09:56.179262       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:09:56.235685       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:09:56.336252       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:09:56.336296       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 10:09:56.336378       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:09:56.358196       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:09:56.358268       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:09:56.364630       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:09:56.365116       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:09:56.365146       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:09:56.366523       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:09:56.366553       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:09:56.366632       1 config.go:200] "Starting service config controller"
	I1123 10:09:56.366642       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:09:56.366663       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:09:56.366671       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:09:56.366685       1 config.go:309] "Starting node config controller"
	I1123 10:09:56.366690       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:09:56.466907       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:09:56.466940       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:09:56.466968       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:09:56.466949       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c36df78f28a7ca903cc5bf44bda92b9e4c12e3a38a41fea5d8f9e265a7a9fb0b] <==
	E1123 10:09:47.682240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 10:09:47.682548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 10:09:47.683537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 10:09:47.683730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 10:09:47.683812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 10:09:47.683841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 10:09:47.683865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 10:09:47.683945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 10:09:47.684370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 10:09:47.684967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 10:09:48.562370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 10:09:48.611266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 10:09:48.633574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 10:09:48.680881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 10:09:48.740295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 10:09:48.756449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 10:09:48.773659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 10:09:48.843066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 10:09:48.886811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 10:09:48.916178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 10:09:48.919223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 10:09:48.940402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 10:09:48.967740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 10:09:48.970297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1123 10:09:50.978696       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:10:12 pause-528307 kubelet[1315]: E1123 10:10:12.321577    1315 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:12 pause-528307 kubelet[1315]: E1123 10:10:12.321595    1315 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:12 pause-528307 kubelet[1315]: E1123 10:10:12.385413    1315 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 23 10:10:12 pause-528307 kubelet[1315]: E1123 10:10:12.385470    1315 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:12 pause-528307 kubelet[1315]: E1123 10:10:12.385487    1315 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:14 pause-528307 kubelet[1315]: W1123 10:10:14.321497    1315 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 23 10:10:14 pause-528307 kubelet[1315]: E1123 10:10:14.321599    1315 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Nov 23 10:10:14 pause-528307 kubelet[1315]: E1123 10:10:14.321662    1315 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:14 pause-528307 kubelet[1315]: E1123 10:10:14.321678    1315 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:14 pause-528307 kubelet[1315]: E1123 10:10:14.321690    1315 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:14 pause-528307 kubelet[1315]: E1123 10:10:14.390299    1315 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 23 10:10:14 pause-528307 kubelet[1315]: E1123 10:10:14.390355    1315 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:14 pause-528307 kubelet[1315]: E1123 10:10:14.390367    1315 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:14 pause-528307 kubelet[1315]: W1123 10:10:14.422649    1315 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 23 10:10:14 pause-528307 kubelet[1315]: W1123 10:10:14.598622    1315 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 23 10:10:14 pause-528307 kubelet[1315]: W1123 10:10:14.870660    1315 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 23 10:10:15 pause-528307 kubelet[1315]: E1123 10:10:15.330906    1315 log.go:32] "Status from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:15 pause-528307 kubelet[1315]: E1123 10:10:15.331059    1315 kubelet.go:2996] "Container runtime sanity check failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:15 pause-528307 kubelet[1315]: E1123 10:10:15.390587    1315 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 23 10:10:15 pause-528307 kubelet[1315]: E1123 10:10:15.390649    1315 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:15 pause-528307 kubelet[1315]: E1123 10:10:15.390665    1315 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 10:10:19 pause-528307 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:10:19 pause-528307 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:10:19 pause-528307 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 10:10:19 pause-528307 systemd[1]: kubelet.service: Consumed 1.292s CPU time.
	

                                                
                                                
-- /stdout --
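The journal excerpt above is the core of this failure: while kubelet is still running, the CRI-O socket at /var/run/crio/crio.sock is already gone, so every CRI call (ListPodSandbox, Status, pod cleanup) returns Unavailable until systemd finally stops kubelet.service at 10:10:19. A minimal way to confirm that state by hand, an illustration rather than part of the test harness, assuming the same `minikube ssh -p <profile>` access used elsewhere in this log and that crictl is present in the kicbase image:

# Check whether the CRI-O socket still exists on the node.
minikube ssh -p pause-528307 ls -l /var/run/crio/crio.sock

# Query the runtime over the same endpoint kubelet dials; while CRI-O is down
# this fails with the same "no such file or directory" seen in the journal.
minikube ssh -p pause-528307 sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info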
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-528307 -n pause-528307
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-528307 -n pause-528307: exit status 2 (328.608779ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-528307 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-990757 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-990757 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (270.084659ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:16:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
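The addon enable never reaches the cluster: minikube's paused-state check shells out to `sudo runc list -f json` on the node, that command exits 1 because /run/runc does not exist, and minikube surfaces it as MK_ADDON_ENABLE_PAUSED instead of enabling the addon. A hedged way to reproduce the same check by hand, illustrative only and assuming the `minikube ssh -p <profile>` access the harness uses elsewhere in this log:

# Re-run the exact command minikube's pause check runs (see the stderr above).
minikube ssh -p old-k8s-version-990757 sudo runc list -f json

# Inspect the runc state directory the error message points at; on this node it
# is missing, which is what turns the check into exit status 1.
minikube ssh -p old-k8s-version-990757 sudo ls -ld /run/runc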
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-990757 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-990757 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-990757 describe deploy/metrics-server -n kube-system: exit status 1 (101.529975ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-990757 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
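Because the enable call failed, no metrics-server deployment exists, so the describe above returns NotFound and the image assertion has empty deployment info to search. For reference, what that assertion effectively verifies is that the deployment's container image carries the overridden registry; a hypothetical manual equivalent (jsonpath instead of describe, and only meaningful on a cluster where the addon did install) would be:

# Hypothetical manual check of the addon image override.
kubectl --context old-k8s-version-990757 -n kube-system \
  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
# Expected to contain: fake.domain/registry.k8s.io/echoserver:1.4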
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-990757
helpers_test.go:243: (dbg) docker inspect old-k8s-version-990757:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fd35c6e2de37eeafffc0c894be730f01c526a52c707a28062e20151e44ba2fa0",
	        "Created": "2025-11-23T10:15:48.885853944Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 347187,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:15:48.931416754Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/fd35c6e2de37eeafffc0c894be730f01c526a52c707a28062e20151e44ba2fa0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fd35c6e2de37eeafffc0c894be730f01c526a52c707a28062e20151e44ba2fa0/hostname",
	        "HostsPath": "/var/lib/docker/containers/fd35c6e2de37eeafffc0c894be730f01c526a52c707a28062e20151e44ba2fa0/hosts",
	        "LogPath": "/var/lib/docker/containers/fd35c6e2de37eeafffc0c894be730f01c526a52c707a28062e20151e44ba2fa0/fd35c6e2de37eeafffc0c894be730f01c526a52c707a28062e20151e44ba2fa0-json.log",
	        "Name": "/old-k8s-version-990757",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-990757:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-990757",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fd35c6e2de37eeafffc0c894be730f01c526a52c707a28062e20151e44ba2fa0",
	                "LowerDir": "/var/lib/docker/overlay2/a2ee0c3fffb58f362d6769aa6722dd8802b1b1ff1dbb3e5e659525bd269aeedd-init/diff:/var/lib/docker/overlay2/fa24abb4c55f78a010c7e2a32f724b8d5e912441e40bb77877899b0e5f3a9c8d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a2ee0c3fffb58f362d6769aa6722dd8802b1b1ff1dbb3e5e659525bd269aeedd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a2ee0c3fffb58f362d6769aa6722dd8802b1b1ff1dbb3e5e659525bd269aeedd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a2ee0c3fffb58f362d6769aa6722dd8802b1b1ff1dbb3e5e659525bd269aeedd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-990757",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-990757/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-990757",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-990757",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-990757",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "358e6e3b491eb70e016a56b7605ed4cd5f15283fb02a58b5e3fcf1e361cfee14",
	            "SandboxKey": "/var/run/docker/netns/358e6e3b491e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-990757": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "052388d40ecf9cf5a4a04b634ec5fc574a97435df4a8b65c1a426a6b8091971d",
	                    "EndpointID": "6726d2de3d1764ad9aa2f829c73edf974926d8ba0d696a33e3098e8be275d08d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "3e:c5:14:3e:5b:84",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-990757",
	                        "fd35c6e2de37"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
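The inspect output above is what the post-mortem helpers work from; in particular the published SSH port (33093 for 22/tcp here) is the value the harness extracts with a Go template via cli_runner later in this log. A standalone equivalent, assuming the container is still running, would be:

# Extract the published SSH port the same way cli_runner does elsewhere in this log.
docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-990757
# -> 33093, per the NetworkSettings section shown above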
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-990757 -n old-k8s-version-990757
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-990757 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-990757 logs -n 25: (1.136375049s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                          │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-791161 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                 │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:15 UTC │ 23 Nov 25 10:15 UTC │
	│ ssh     │ -p flannel-791161 sudo cat /etc/kubernetes/kubelet.conf                                                                                                │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:15 UTC │ 23 Nov 25 10:15 UTC │
	│ ssh     │ -p flannel-791161 sudo cat /var/lib/kubelet/config.yaml                                                                                                │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:15 UTC │ 23 Nov 25 10:15 UTC │
	│ ssh     │ -p flannel-791161 sudo systemctl status docker --all --full --no-pager                                                                                 │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p flannel-791161 sudo systemctl cat docker --no-pager                                                                                                 │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:15 UTC │ 23 Nov 25 10:15 UTC │
	│ ssh     │ -p flannel-791161 sudo cat /etc/docker/daemon.json                                                                                                     │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p flannel-791161 sudo docker system info                                                                                                              │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p flannel-791161 sudo systemctl status cri-docker --all --full --no-pager                                                                             │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p flannel-791161 sudo systemctl cat cri-docker --no-pager                                                                                             │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:15 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                        │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ ssh     │ -p flannel-791161 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                  │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo cri-dockerd --version                                                                                                           │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo systemctl status containerd --all --full --no-pager                                                                             │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ ssh     │ -p flannel-791161 sudo systemctl cat containerd --no-pager                                                                                             │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo cat /lib/systemd/system/containerd.service                                                                                      │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo cat /etc/containerd/config.toml                                                                                                 │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo containerd config dump                                                                                                          │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo systemctl status crio --all --full --no-pager                                                                                   │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo systemctl cat crio --no-pager                                                                                                   │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                         │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo crio config                                                                                                                     │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ delete  │ -p flannel-791161                                                                                                                                      │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ start   │ -p embed-certs-412306 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ embed-certs-412306     │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ ssh     │ -p bridge-791161 pgrep -a kubelet                                                                                                                      │ bridge-791161          │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-990757 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain           │ old-k8s-version-990757 │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:16:09
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:16:09.384488  356138 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:16:09.384651  356138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:16:09.384664  356138 out.go:374] Setting ErrFile to fd 2...
	I1123 10:16:09.384670  356138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:16:09.384941  356138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:16:09.385666  356138 out.go:368] Setting JSON to false
	I1123 10:16:09.387494  356138 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10710,"bootTime":1763882259,"procs":490,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:16:09.387583  356138 start.go:143] virtualization: kvm guest
	I1123 10:16:09.389675  356138 out.go:179] * [embed-certs-412306] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:16:09.391215  356138 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:16:09.391256  356138 notify.go:221] Checking for updates...
	I1123 10:16:09.393259  356138 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:16:09.394603  356138 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:16:09.395803  356138 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 10:16:09.397054  356138 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:16:09.398810  356138 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:16:09.400667  356138 config.go:182] Loaded profile config "bridge-791161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:16:09.400825  356138 config.go:182] Loaded profile config "no-preload-541522": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:16:09.400980  356138 config.go:182] Loaded profile config "old-k8s-version-990757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 10:16:09.401117  356138 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:16:09.431550  356138 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 10:16:09.431721  356138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:16:09.501610  356138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-23 10:16:09.486961066 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:16:09.501769  356138 docker.go:319] overlay module found
	I1123 10:16:09.503502  356138 out.go:179] * Using the docker driver based on user configuration
	I1123 10:16:08.932406  341630 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:08.932428  341630 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:16:08.932485  341630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-791161
	I1123 10:16:08.962254  341630 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:08.962286  341630 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:16:08.962357  341630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-791161
	I1123 10:16:08.969489  341630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/bridge-791161/id_rsa Username:docker}
	I1123 10:16:08.986744  341630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/bridge-791161/id_rsa Username:docker}
	I1123 10:16:09.003812  341630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:16:09.056864  341630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:16:09.090911  341630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:09.108517  341630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:09.226531  341630 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1123 10:16:09.228833  341630 node_ready.go:35] waiting up to 15m0s for node "bridge-791161" to be "Ready" ...
	I1123 10:16:09.245324  341630 node_ready.go:49] node "bridge-791161" is "Ready"
	I1123 10:16:09.245361  341630 node_ready.go:38] duration metric: took 16.394308ms for node "bridge-791161" to be "Ready" ...
	I1123 10:16:09.245379  341630 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:16:09.245433  341630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:16:09.502654  341630 api_server.go:72] duration metric: took 602.591604ms to wait for apiserver process to appear ...
	I1123 10:16:09.502681  341630 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:16:09.502706  341630 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 10:16:09.509263  341630 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 10:16:09.510155  341630 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 10:16:09.504848  356138 start.go:309] selected driver: docker
	I1123 10:16:09.504864  356138 start.go:927] validating driver "docker" against <nil>
	I1123 10:16:09.504878  356138 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:16:09.505666  356138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:16:09.570314  356138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-23 10:16:09.560155745 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:16:09.570532  356138 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 10:16:09.570826  356138 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:16:09.572359  356138 out.go:179] * Using Docker driver with root privileges
	I1123 10:16:09.573651  356138 cni.go:84] Creating CNI manager for ""
	I1123 10:16:09.573735  356138 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:16:09.573748  356138 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:16:09.573829  356138 start.go:353] cluster config:
	{Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:16:09.575056  356138 out.go:179] * Starting "embed-certs-412306" primary control-plane node in "embed-certs-412306" cluster
	I1123 10:16:09.576077  356138 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:16:09.577197  356138 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:16:09.578314  356138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:16:09.578350  356138 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 10:16:09.578363  356138 cache.go:65] Caching tarball of preloaded images
	I1123 10:16:09.578405  356138 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:16:09.578475  356138 preload.go:238] Found /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 10:16:09.578490  356138 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:16:09.578607  356138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/config.json ...
	I1123 10:16:09.578632  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/config.json: {Name:mk1fd6c8c1b8c2c18e5b4ea57dc46155bd997340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:09.603731  356138 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:16:09.603757  356138 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:16:09.603773  356138 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:16:09.603816  356138 start.go:360] acquireMachinesLock for embed-certs-412306: {Name:mk4f25fc676f86a4d15ab0bc341b16f0d56928c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:16:09.603920  356138 start.go:364] duration metric: took 78.804µs to acquireMachinesLock for "embed-certs-412306"
	I1123 10:16:09.603953  356138 start.go:93] Provisioning new machine with config: &{Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:16:09.604048  356138 start.go:125] createHost starting for "" (driver="docker")
	I1123 10:16:09.510617  341630 api_server.go:141] control plane version: v1.34.1
	I1123 10:16:09.510639  341630 api_server.go:131] duration metric: took 7.9515ms to wait for apiserver health ...
	I1123 10:16:09.510646  341630 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:16:09.511774  341630 addons.go:530] duration metric: took 611.647616ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 10:16:09.513306  341630 system_pods.go:59] 6 kube-system pods found
	I1123 10:16:09.513342  341630 system_pods.go:61] "etcd-bridge-791161" [0cef3305-4f78-41d8-955b-4dc8e3e1b20b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:16:09.513353  341630 system_pods.go:61] "kube-apiserver-bridge-791161" [c3ee8173-f846-4c28-9542-5db74dd1ca3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:16:09.513367  341630 system_pods.go:61] "kube-controller-manager-bridge-791161" [f67ddef5-f1cd-4d3f-b388-7d44c2a82e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:16:09.513379  341630 system_pods.go:61] "kube-proxy-sn6s2" [ebbef6f3-f2af-4403-bf85-3391bfe8374f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 10:16:09.513388  341630 system_pods.go:61] "kube-scheduler-bridge-791161" [1b5778a2-5fe1-4a74-9bce-36ef3021458f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:16:09.513400  341630 system_pods.go:61] "storage-provisioner" [450add9d-9942-4b99-b18d-13cf2aac97d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:09.513408  341630 system_pods.go:74] duration metric: took 2.755326ms to wait for pod list to return data ...
	I1123 10:16:09.513421  341630 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:16:09.515529  341630 default_sa.go:45] found service account: "default"
	I1123 10:16:09.515550  341630 default_sa.go:55] duration metric: took 2.122813ms for default service account to be created ...
	I1123 10:16:09.515559  341630 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:16:09.517664  341630 system_pods.go:86] 6 kube-system pods found
	I1123 10:16:09.517695  341630 system_pods.go:89] "etcd-bridge-791161" [0cef3305-4f78-41d8-955b-4dc8e3e1b20b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:16:09.517709  341630 system_pods.go:89] "kube-apiserver-bridge-791161" [c3ee8173-f846-4c28-9542-5db74dd1ca3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:16:09.517719  341630 system_pods.go:89] "kube-controller-manager-bridge-791161" [f67ddef5-f1cd-4d3f-b388-7d44c2a82e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:16:09.517731  341630 system_pods.go:89] "kube-proxy-sn6s2" [ebbef6f3-f2af-4403-bf85-3391bfe8374f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 10:16:09.517738  341630 system_pods.go:89] "kube-scheduler-bridge-791161" [1b5778a2-5fe1-4a74-9bce-36ef3021458f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:16:09.517746  341630 system_pods.go:89] "storage-provisioner" [450add9d-9942-4b99-b18d-13cf2aac97d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:09.517783  341630 retry.go:31] will retry after 269.045888ms: missing components: kube-dns, kube-proxy
	I1123 10:16:09.732517  341630 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-791161" context rescaled to 1 replicas
	I1123 10:16:09.792357  341630 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:09.792401  341630 system_pods.go:89] "coredns-66bc5c9577-5jbpl" [d4bd48f5-9fde-4a68-b96b-a0c62824cadc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:09.792413  341630 system_pods.go:89] "coredns-66bc5c9577-p6sw2" [7a660efc-5dc7-4014-994c-64d53264718d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:09.792424  341630 system_pods.go:89] "etcd-bridge-791161" [0cef3305-4f78-41d8-955b-4dc8e3e1b20b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:16:09.792436  341630 system_pods.go:89] "kube-apiserver-bridge-791161" [c3ee8173-f846-4c28-9542-5db74dd1ca3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:16:09.792446  341630 system_pods.go:89] "kube-controller-manager-bridge-791161" [f67ddef5-f1cd-4d3f-b388-7d44c2a82e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:16:09.792463  341630 system_pods.go:89] "kube-proxy-sn6s2" [ebbef6f3-f2af-4403-bf85-3391bfe8374f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 10:16:09.792475  341630 system_pods.go:89] "kube-scheduler-bridge-791161" [1b5778a2-5fe1-4a74-9bce-36ef3021458f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:16:09.792483  341630 system_pods.go:89] "storage-provisioner" [450add9d-9942-4b99-b18d-13cf2aac97d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:09.792509  341630 retry.go:31] will retry after 270.754186ms: missing components: kube-dns, kube-proxy
	I1123 10:16:10.068331  341630 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:10.068370  341630 system_pods.go:89] "coredns-66bc5c9577-5jbpl" [d4bd48f5-9fde-4a68-b96b-a0c62824cadc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:10.068381  341630 system_pods.go:89] "coredns-66bc5c9577-p6sw2" [7a660efc-5dc7-4014-994c-64d53264718d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:10.068391  341630 system_pods.go:89] "etcd-bridge-791161" [0cef3305-4f78-41d8-955b-4dc8e3e1b20b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:16:10.068400  341630 system_pods.go:89] "kube-apiserver-bridge-791161" [c3ee8173-f846-4c28-9542-5db74dd1ca3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:16:10.068409  341630 system_pods.go:89] "kube-controller-manager-bridge-791161" [f67ddef5-f1cd-4d3f-b388-7d44c2a82e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:16:10.068430  341630 system_pods.go:89] "kube-proxy-sn6s2" [ebbef6f3-f2af-4403-bf85-3391bfe8374f] Running
	I1123 10:16:10.068443  341630 system_pods.go:89] "kube-scheduler-bridge-791161" [1b5778a2-5fe1-4a74-9bce-36ef3021458f] Running
	I1123 10:16:10.068450  341630 system_pods.go:89] "storage-provisioner" [450add9d-9942-4b99-b18d-13cf2aac97d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:10.068477  341630 retry.go:31] will retry after 429.754148ms: missing components: kube-dns
	I1123 10:16:10.503386  341630 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:10.503419  341630 system_pods.go:89] "coredns-66bc5c9577-5jbpl" [d4bd48f5-9fde-4a68-b96b-a0c62824cadc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:10.503426  341630 system_pods.go:89] "coredns-66bc5c9577-p6sw2" [7a660efc-5dc7-4014-994c-64d53264718d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:10.503433  341630 system_pods.go:89] "etcd-bridge-791161" [0cef3305-4f78-41d8-955b-4dc8e3e1b20b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:16:10.503438  341630 system_pods.go:89] "kube-apiserver-bridge-791161" [c3ee8173-f846-4c28-9542-5db74dd1ca3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:16:10.503444  341630 system_pods.go:89] "kube-controller-manager-bridge-791161" [f67ddef5-f1cd-4d3f-b388-7d44c2a82e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:16:10.503448  341630 system_pods.go:89] "kube-proxy-sn6s2" [ebbef6f3-f2af-4403-bf85-3391bfe8374f] Running
	I1123 10:16:10.503451  341630 system_pods.go:89] "kube-scheduler-bridge-791161" [1b5778a2-5fe1-4a74-9bce-36ef3021458f] Running
	I1123 10:16:10.503454  341630 system_pods.go:89] "storage-provisioner" [450add9d-9942-4b99-b18d-13cf2aac97d6] Running
	I1123 10:16:10.503470  341630 retry.go:31] will retry after 408.73206ms: missing components: kube-dns
	I1123 10:16:10.917355  341630 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:10.917398  341630 system_pods.go:89] "coredns-66bc5c9577-5jbpl" [d4bd48f5-9fde-4a68-b96b-a0c62824cadc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:10.917410  341630 system_pods.go:89] "coredns-66bc5c9577-p6sw2" [7a660efc-5dc7-4014-994c-64d53264718d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:10.917420  341630 system_pods.go:89] "etcd-bridge-791161" [0cef3305-4f78-41d8-955b-4dc8e3e1b20b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:16:10.917429  341630 system_pods.go:89] "kube-apiserver-bridge-791161" [c3ee8173-f846-4c28-9542-5db74dd1ca3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:16:10.917451  341630 system_pods.go:89] "kube-controller-manager-bridge-791161" [f67ddef5-f1cd-4d3f-b388-7d44c2a82e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:16:10.917465  341630 system_pods.go:89] "kube-proxy-sn6s2" [ebbef6f3-f2af-4403-bf85-3391bfe8374f] Running
	I1123 10:16:10.917474  341630 system_pods.go:89] "kube-scheduler-bridge-791161" [1b5778a2-5fe1-4a74-9bce-36ef3021458f] Running
	I1123 10:16:10.917478  341630 system_pods.go:89] "storage-provisioner" [450add9d-9942-4b99-b18d-13cf2aac97d6] Running
	I1123 10:16:10.917500  341630 retry.go:31] will retry after 552.289133ms: missing components: kube-dns
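
The retry.go lines above are minikube's readiness loop: it lists the kube-system pods, works out which required components (here kube-dns and kube-proxy) still lack a running pod, and tries again after a growing delay. A minimal sketch of that pattern, shelling out to kubectl rather than using minikube's internal client; the label selectors and the backoff growth are assumptions for illustration, not minikube's exact values:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// requiredLabels maps a required component to a label selector kubectl can use for it.
var requiredLabels = map[string]string{
	"kube-dns":   "k8s-app=kube-dns",
	"kube-proxy": "k8s-app=kube-proxy",
}

// missingComponents returns the components that do not yet have a Running pod.
func missingComponents() ([]string, error) {
	var missing []string
	for name, label := range requiredLabels {
		out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pods",
			"-l", label, "-o", "jsonpath={.items[*].status.phase}").Output()
		if err != nil {
			return nil, err
		}
		if !strings.Contains(string(out), "Running") {
			missing = append(missing, name)
		}
	}
	return missing, nil
}

func main() {
	delay := 300 * time.Millisecond
	for {
		missing, err := missingComponents()
		if err != nil {
			fmt.Println("listing pods failed:", err)
		} else if len(missing) == 0 {
			fmt.Println("all required kube-system components are running")
			return
		} else {
			fmt.Printf("will retry after %v: missing components: %s\n", delay, strings.Join(missing, ", "))
		}
		time.Sleep(delay)
		delay += delay / 2 // grow the delay, roughly like the increasing retries in the log
	}
}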
	I1123 10:16:09.278883  344952 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:16:09.372128  344952 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 10:16:09.619893  344952 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:16:10.283551  344952 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:16:10.867997  344952 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:16:10.868330  344952 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-541522] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 10:16:10.989337  344952 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:16:10.989485  344952 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-541522] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 10:16:11.169439  344952 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:16:11.400232  344952 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:16:11.647348  344952 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:16:11.647533  344952 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:16:11.771440  344952 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:16:12.267757  344952 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 10:16:12.654977  344952 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:16:12.947814  344952 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:16:13.078046  344952 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:16:13.078626  344952 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:16:13.136374  344952 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
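
Each "[certs]" line above reports a certificate together with the DNS names and IPs it is signed for, i.e. its Subject Alternative Names. A minimal, self-signed sketch of producing such a serving certificate with Go's crypto/x509, reusing the names and IPs from the etcd/server entry above; real kubeadm certificates are of course signed by the corresponding CA rather than self-signed:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "no-preload-541522"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs kubeadm reports: DNS names and IPs the serving cert is valid for.
		DNSNames:    []string{"localhost", "no-preload-541522"},
		IPAddresses: []net.IP{net.ParseIP("192.168.85.2"), net.ParseIP("127.0.0.1"), net.ParseIP("::1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}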
	I1123 10:16:08.666124  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:09.166689  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:09.666832  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:10.166752  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:10.666681  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:11.165984  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:11.666304  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:12.166196  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:12.666342  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:13.166030  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
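
The repeated ssh_runner lines above are minikube polling "kubectl get sa default" roughly every half second until the default ServiceAccount exists, which is how it knows the namespace has finished bootstrapping. A minimal sketch of the same wait; the two-minute deadline is an assumption, not minikube's value:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		// Equivalent of: kubectl get sa default --kubeconfig=...
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}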
	I1123 10:16:13.195964  344952 out.go:252]   - Booting up control plane ...
	I1123 10:16:13.196155  344952 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:16:13.196274  344952 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:16:13.196362  344952 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:16:13.196492  344952 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:16:13.196611  344952 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 10:16:13.196738  344952 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 10:16:13.197029  344952 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:16:13.197260  344952 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:16:13.266865  344952 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 10:16:13.267069  344952 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
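
The "[kubelet-check]" line waits for the kubelet's local health endpoint, http://127.0.0.1:10248/healthz, to answer with 200. A minimal sketch of such a health poll with net/http; the URL and the 4m0s budget come from the log line, the one-second poll interval is an assumption:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(4 * time.Minute) // "This can take up to 4m0s"
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		time.Sleep(time.Second) // assumed poll interval
	}
	fmt.Println("kubelet did not become healthy in time")
}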
	I1123 10:16:09.606473  356138 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 10:16:09.606832  356138 start.go:159] libmachine.API.Create for "embed-certs-412306" (driver="docker")
	I1123 10:16:09.606885  356138 client.go:173] LocalClient.Create starting
	I1123 10:16:09.607022  356138 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem
	I1123 10:16:09.607067  356138 main.go:143] libmachine: Decoding PEM data...
	I1123 10:16:09.607113  356138 main.go:143] libmachine: Parsing certificate...
	I1123 10:16:09.607181  356138 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem
	I1123 10:16:09.607208  356138 main.go:143] libmachine: Decoding PEM data...
	I1123 10:16:09.607233  356138 main.go:143] libmachine: Parsing certificate...
	I1123 10:16:09.607683  356138 cli_runner.go:164] Run: docker network inspect embed-certs-412306 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 10:16:09.629449  356138 cli_runner.go:211] docker network inspect embed-certs-412306 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 10:16:09.629532  356138 network_create.go:284] running [docker network inspect embed-certs-412306] to gather additional debugging logs...
	I1123 10:16:09.629558  356138 cli_runner.go:164] Run: docker network inspect embed-certs-412306
	W1123 10:16:09.649505  356138 cli_runner.go:211] docker network inspect embed-certs-412306 returned with exit code 1
	I1123 10:16:09.649534  356138 network_create.go:287] error running [docker network inspect embed-certs-412306]: docker network inspect embed-certs-412306: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-412306 not found
	I1123 10:16:09.649551  356138 network_create.go:289] output of [docker network inspect embed-certs-412306]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-412306 not found
	
	** /stderr **
	I1123 10:16:09.649693  356138 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:16:09.668995  356138 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9af1e2c0d039 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:86:44:24:e5:b5} reservation:<nil>}
	I1123 10:16:09.669799  356138 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-461f783b5692 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:1f:63:e6:a3:d5} reservation:<nil>}
	I1123 10:16:09.670740  356138 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-00c53b2b0c8c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:de:97:e2:97:bc:92} reservation:<nil>}
	I1123 10:16:09.671473  356138 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-052388d40ecf IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:de:97:1c:bc:d1:b9} reservation:<nil>}
	I1123 10:16:09.672185  356138 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-0caff4f103e2 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f2:ae:32:4b:cf:65} reservation:<nil>}
	I1123 10:16:09.676786  356138 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d5fec0}
	I1123 10:16:09.676832  356138 network_create.go:124] attempt to create docker network embed-certs-412306 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1123 10:16:09.676908  356138 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-412306 embed-certs-412306
	I1123 10:16:09.737193  356138 network_create.go:108] docker network embed-certs-412306 192.168.94.0/24 created
	I1123 10:16:09.737241  356138 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-412306" container
	I1123 10:16:09.737307  356138 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 10:16:09.758160  356138 cli_runner.go:164] Run: docker volume create embed-certs-412306 --label name.minikube.sigs.k8s.io=embed-certs-412306 --label created_by.minikube.sigs.k8s.io=true
	I1123 10:16:09.779650  356138 oci.go:103] Successfully created a docker volume embed-certs-412306
	I1123 10:16:09.779742  356138 cli_runner.go:164] Run: docker run --rm --name embed-certs-412306-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-412306 --entrypoint /usr/bin/test -v embed-certs-412306:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 10:16:10.255390  356138 oci.go:107] Successfully prepared a docker volume embed-certs-412306
	I1123 10:16:10.255455  356138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:16:10.255469  356138 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 10:16:10.255530  356138 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-412306:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
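
The network.go lines above show how minikube picks a subnet for the new cluster network: it walks 192.168.x.0/24 candidates (49, 58, 67, 76, 85, ... in the log, i.e. steps of 9), skips any subnet an existing bridge already uses, and creates a docker network on the first free one. A rough sketch of that scan using the docker CLI; the step size and range are inferred from the log, and the network name "my-network" is a placeholder:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// takenSubnets collects the subnet of every existing docker network.
func takenSubnets() (map[string]bool, error) {
	names, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		return nil, err
	}
	taken := map[string]bool{}
	for _, name := range strings.Fields(string(names)) {
		out, err := exec.Command("docker", "network", "inspect", name,
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
		if err != nil {
			continue
		}
		for _, s := range strings.Fields(string(out)) {
			taken[s] = true
		}
	}
	return taken, nil
}

func main() {
	taken, err := takenSubnets()
	if err != nil {
		panic(err)
	}
	// Candidates as in the log: 192.168.49.0/24, 192.168.58.0/24, ... in steps of 9.
	for third := 49; third < 255; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[subnet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		gateway := fmt.Sprintf("192.168.%d.1", third)
		cmd := exec.Command("docker", "network", "create", "--driver=bridge",
			"--subnet="+subnet, "--gateway="+gateway, "my-network") // placeholder name
		if out, err := cmd.CombinedOutput(); err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
		fmt.Println("created network on", subnet)
		return
	}
}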
	I1123 10:16:11.474871  341630 system_pods.go:86] 7 kube-system pods found
	I1123 10:16:11.474914  341630 system_pods.go:89] "coredns-66bc5c9577-p6sw2" [7a660efc-5dc7-4014-994c-64d53264718d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:11.474924  341630 system_pods.go:89] "etcd-bridge-791161" [0cef3305-4f78-41d8-955b-4dc8e3e1b20b] Running
	I1123 10:16:11.474945  341630 system_pods.go:89] "kube-apiserver-bridge-791161" [c3ee8173-f846-4c28-9542-5db74dd1ca3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:16:11.474955  341630 system_pods.go:89] "kube-controller-manager-bridge-791161" [f67ddef5-f1cd-4d3f-b388-7d44c2a82e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:16:11.474961  341630 system_pods.go:89] "kube-proxy-sn6s2" [ebbef6f3-f2af-4403-bf85-3391bfe8374f] Running
	I1123 10:16:11.474968  341630 system_pods.go:89] "kube-scheduler-bridge-791161" [1b5778a2-5fe1-4a74-9bce-36ef3021458f] Running
	I1123 10:16:11.474973  341630 system_pods.go:89] "storage-provisioner" [450add9d-9942-4b99-b18d-13cf2aac97d6] Running
	I1123 10:16:11.474984  341630 system_pods.go:126] duration metric: took 1.959418216s to wait for k8s-apps to be running ...
	I1123 10:16:11.474994  341630 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:16:11.475054  341630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:16:11.489403  341630 system_svc.go:56] duration metric: took 14.399252ms WaitForService to wait for kubelet
	I1123 10:16:11.489444  341630 kubeadm.go:587] duration metric: took 2.58938325s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:16:11.489470  341630 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:16:11.492755  341630 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:16:11.492782  341630 node_conditions.go:123] node cpu capacity is 8
	I1123 10:16:11.492808  341630 node_conditions.go:105] duration metric: took 3.332237ms to run NodePressure ...
	I1123 10:16:11.492820  341630 start.go:242] waiting for startup goroutines ...
	I1123 10:16:11.492829  341630 start.go:247] waiting for cluster config update ...
	I1123 10:16:11.492840  341630 start.go:256] writing updated cluster config ...
	I1123 10:16:11.493117  341630 ssh_runner.go:195] Run: rm -f paused
	I1123 10:16:11.497127  341630 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:11.501040  341630 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p6sw2" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 10:16:13.507081  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:15.507577  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	I1123 10:16:13.666736  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:14.166653  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:14.666411  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:15.166345  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:15.665938  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:16.166765  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:16.666304  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:17.166588  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:17.665914  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:18.166076  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:18.250824  344706 kubeadm.go:1114] duration metric: took 12.162789359s to wait for elevateKubeSystemPrivileges
	I1123 10:16:18.250873  344706 kubeadm.go:403] duration metric: took 24.23117455s to StartCluster
	I1123 10:16:18.250896  344706 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:18.250984  344706 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:16:18.252313  344706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:18.252591  344706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:16:18.252586  344706 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:16:18.252625  344706 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:16:18.252726  344706 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-990757"
	I1123 10:16:18.252748  344706 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-990757"
	I1123 10:16:18.252763  344706 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-990757"
	I1123 10:16:18.252783  344706 host.go:66] Checking if "old-k8s-version-990757" exists ...
	I1123 10:16:18.252788  344706 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-990757"
	I1123 10:16:18.252794  344706 config.go:182] Loaded profile config "old-k8s-version-990757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 10:16:18.253185  344706 cli_runner.go:164] Run: docker container inspect old-k8s-version-990757 --format={{.State.Status}}
	I1123 10:16:18.253439  344706 cli_runner.go:164] Run: docker container inspect old-k8s-version-990757 --format={{.State.Status}}
	I1123 10:16:18.256225  344706 out.go:179] * Verifying Kubernetes components...
	I1123 10:16:18.257663  344706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:16:18.278672  344706 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-990757"
	I1123 10:16:18.278725  344706 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:16:14.780767  356138 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-412306:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.525179702s)
	I1123 10:16:14.780809  356138 kic.go:203] duration metric: took 4.525336925s to extract preloaded images to volume ...
	W1123 10:16:14.780917  356138 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 10:16:14.780972  356138 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 10:16:14.781025  356138 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 10:16:14.851187  356138 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-412306 --name embed-certs-412306 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-412306 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-412306 --network embed-certs-412306 --ip 192.168.94.2 --volume embed-certs-412306:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 10:16:15.210434  356138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Running}}
	I1123 10:16:15.236308  356138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:16:15.262410  356138 cli_runner.go:164] Run: docker exec embed-certs-412306 stat /var/lib/dpkg/alternatives/iptables
	I1123 10:16:15.312245  356138 oci.go:144] the created container "embed-certs-412306" has a running status.
	I1123 10:16:15.312287  356138 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa...
	I1123 10:16:15.508167  356138 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 10:16:15.538609  356138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:16:15.568324  356138 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 10:16:15.568357  356138 kic_runner.go:114] Args: [docker exec --privileged embed-certs-412306 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 10:16:15.633555  356138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:16:15.657069  356138 machine.go:94] provisionDockerMachine start ...
	I1123 10:16:15.657228  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:15.682778  356138 main.go:143] libmachine: Using SSH client type: native
	I1123 10:16:15.683182  356138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 10:16:15.683211  356138 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:16:15.834361  356138 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-412306
	
	I1123 10:16:15.834394  356138 ubuntu.go:182] provisioning hostname "embed-certs-412306"
	I1123 10:16:15.834460  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:15.855149  356138 main.go:143] libmachine: Using SSH client type: native
	I1123 10:16:15.855386  356138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 10:16:15.855408  356138 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-412306 && echo "embed-certs-412306" | sudo tee /etc/hostname
	I1123 10:16:16.024669  356138 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-412306
	
	I1123 10:16:16.024755  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:16.048672  356138 main.go:143] libmachine: Using SSH client type: native
	I1123 10:16:16.048986  356138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 10:16:16.049013  356138 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-412306' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-412306/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-412306' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:16:16.203231  356138 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:16:16.203261  356138 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-64343/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-64343/.minikube}
	I1123 10:16:16.203307  356138 ubuntu.go:190] setting up certificates
	I1123 10:16:16.203329  356138 provision.go:84] configureAuth start
	I1123 10:16:16.203397  356138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412306
	I1123 10:16:16.224391  356138 provision.go:143] copyHostCerts
	I1123 10:16:16.224466  356138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem, removing ...
	I1123 10:16:16.224486  356138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem
	I1123 10:16:16.224568  356138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem (1082 bytes)
	I1123 10:16:16.224688  356138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem, removing ...
	I1123 10:16:16.224702  356138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem
	I1123 10:16:16.224741  356138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem (1123 bytes)
	I1123 10:16:16.224838  356138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem, removing ...
	I1123 10:16:16.224850  356138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem
	I1123 10:16:16.224885  356138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem (1675 bytes)
	I1123 10:16:16.224961  356138 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem org=jenkins.embed-certs-412306 san=[127.0.0.1 192.168.94.2 embed-certs-412306 localhost minikube]
	I1123 10:16:16.252659  356138 provision.go:177] copyRemoteCerts
	I1123 10:16:16.252799  356138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:16:16.252862  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:16.274900  356138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:16:16.381909  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 10:16:16.403354  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:16:16.421969  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:16:16.443591  356138 provision.go:87] duration metric: took 240.241648ms to configureAuth
	I1123 10:16:16.443629  356138 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:16:16.443817  356138 config.go:182] Loaded profile config "embed-certs-412306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:16:16.443936  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:16.464697  356138 main.go:143] libmachine: Using SSH client type: native
	I1123 10:16:16.465000  356138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 10:16:16.465026  356138 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:16:16.768631  356138 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:16:16.768659  356138 machine.go:97] duration metric: took 1.11155421s to provisionDockerMachine
	I1123 10:16:16.768671  356138 client.go:176] duration metric: took 7.161774198s to LocalClient.Create
	I1123 10:16:16.768695  356138 start.go:167] duration metric: took 7.161866501s to libmachine.API.Create "embed-certs-412306"
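
The provisioning steps above (hostname, the /etc/hosts rewrite, the CRIO_MINIKUBE_OPTIONS drop-in) are all shell commands run over SSH against the container's published port, 127.0.0.1:33098 here, authenticated with the generated id_rsa key. A minimal sketch of that pattern with the golang.org/x/crypto/ssh module; the key path, port, and command are placeholders taken from or modelled on the log:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder path, mirroring .minikube/machines/<name>/id_rsa from the log.
	keyBytes, err := os.ReadFile("/path/to/.minikube/machines/embed-certs-412306/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container, not for real hosts
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33098", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("SSH cmd output: %s", out)
}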
	I1123 10:16:16.768705  356138 start.go:293] postStartSetup for "embed-certs-412306" (driver="docker")
	I1123 10:16:16.768716  356138 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:16:16.768980  356138 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:16:16.769049  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:16.800429  356138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:16:16.927787  356138 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:16:16.931545  356138 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:16:16.931591  356138 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:16:16.931614  356138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/addons for local assets ...
	I1123 10:16:16.931671  356138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/files for local assets ...
	I1123 10:16:16.931739  356138 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem -> 678702.pem in /etc/ssl/certs
	I1123 10:16:16.931823  356138 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:16:16.939473  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:16:16.959179  356138 start.go:296] duration metric: took 190.46241ms for postStartSetup
	I1123 10:16:16.959501  356138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412306
	I1123 10:16:16.984276  356138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/config.json ...
	I1123 10:16:16.984618  356138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:16:16.984693  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:17.006779  356138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:16:17.112458  356138 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:16:17.117106  356138 start.go:128] duration metric: took 7.513028342s to createHost
	I1123 10:16:17.117133  356138 start.go:83] releasing machines lock for "embed-certs-412306", held for 7.513197957s
	I1123 10:16:17.117208  356138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412306
	I1123 10:16:17.134501  356138 ssh_runner.go:195] Run: cat /version.json
	I1123 10:16:17.134547  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:17.134586  356138 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:16:17.134662  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:17.153344  356138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:16:17.153649  356138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:16:17.310865  356138 ssh_runner.go:195] Run: systemctl --version
	I1123 10:16:17.317393  356138 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:16:17.352355  356138 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:16:17.357116  356138 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:16:17.357180  356138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:16:17.382356  356138 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 10:16:17.382379  356138 start.go:496] detecting cgroup driver to use...
	I1123 10:16:17.382409  356138 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 10:16:17.382462  356138 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:16:17.398562  356138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:16:17.411069  356138 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:16:17.411138  356138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:16:17.427203  356138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:16:17.444861  356138 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:16:17.530800  356138 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:16:17.622946  356138 docker.go:234] disabling docker service ...
	I1123 10:16:17.623025  356138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:16:17.641931  356138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:16:17.654457  356138 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:16:17.747652  356138 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:16:17.845810  356138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:16:17.858620  356138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:16:17.875812  356138 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:16:17.875880  356138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:16:17.888305  356138 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 10:16:17.888379  356138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:16:17.899801  356138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:16:17.911635  356138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:16:17.923072  356138 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:16:17.932765  356138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:16:17.945022  356138 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:16:17.962784  356138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:16:17.974698  356138 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:16:17.984798  356138 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:16:17.994564  356138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:16:18.110636  356138 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:16:18.290560  356138 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:16:18.290681  356138 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:16:18.297099  356138 start.go:564] Will wait 60s for crictl version
	I1123 10:16:18.297225  356138 ssh_runner.go:195] Run: which crictl
	I1123 10:16:18.304375  356138 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:16:18.348465  356138 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:16:18.348551  356138 ssh_runner.go:195] Run: crio --version
	I1123 10:16:18.389627  356138 ssh_runner.go:195] Run: crio --version
	I1123 10:16:18.430444  356138 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
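
The crio.go lines above configure CRI-O by rewriting /etc/crio/crio.conf.d/02-crio.conf in place with sed: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is switched to systemd, and conmon_cgroup is set to "pod", after which crio is restarted. A small sketch of the first two of those in-place rewrites done from Go instead of sed; the file path comes from the log, and the program would need root (try it against a throwaway copy):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log above
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
	// minikube then runs: systemctl daemon-reload && systemctl restart crio
}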
	I1123 10:16:18.278756  344706 host.go:66] Checking if "old-k8s-version-990757" exists ...
	I1123 10:16:18.279376  344706 cli_runner.go:164] Run: docker container inspect old-k8s-version-990757 --format={{.State.Status}}
	I1123 10:16:18.279793  344706 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:18.279857  344706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:16:18.280007  344706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-990757
	I1123 10:16:18.306787  344706 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:18.306810  344706 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:16:18.306871  344706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-990757
	I1123 10:16:18.316758  344706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/old-k8s-version-990757/id_rsa Username:docker}
	I1123 10:16:18.336999  344706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/old-k8s-version-990757/id_rsa Username:docker}
	I1123 10:16:18.367903  344706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:16:18.433504  344706 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:16:18.466536  344706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:18.470919  344706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:14.268571  344952 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001808065s
	I1123 10:16:14.273043  344952 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 10:16:14.273189  344952 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1123 10:16:14.273313  344952 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 10:16:14.273420  344952 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 10:16:16.059724  344952 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.786566479s
	I1123 10:16:16.921595  344952 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.648519148s
	I1123 10:16:18.777367  344952 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.504051541s
	I1123 10:16:18.794664  344952 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:16:18.805590  344952 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:16:18.816203  344952 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:16:18.816513  344952 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-541522 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:16:18.824772  344952 kubeadm.go:319] [bootstrap-token] Using token: mhptlw.q9ng0jhdmffx1zol
	I1123 10:16:18.826026  344952 out.go:252]   - Configuring RBAC rules ...
	I1123 10:16:18.826262  344952 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:16:18.830334  344952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:16:18.838855  344952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:16:18.843285  344952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:16:18.845986  344952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:16:18.848662  344952 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:16:18.647290  344706 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 10:16:18.648399  344706 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-990757" to be "Ready" ...
	I1123 10:16:18.933557  344706 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 10:16:18.431580  356138 cli_runner.go:164] Run: docker network inspect embed-certs-412306 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:16:18.458210  356138 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1123 10:16:18.464771  356138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:16:18.479461  356138 kubeadm.go:884] updating cluster {Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:16:18.479617  356138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:16:18.479685  356138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:16:18.535015  356138 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:16:18.535043  356138 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:16:18.535112  356138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:16:18.576193  356138 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:16:18.576222  356138 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:16:18.576333  356138 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1123 10:16:18.576476  356138 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-412306 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:16:18.576564  356138 ssh_runner.go:195] Run: crio config
	I1123 10:16:18.633738  356138 cni.go:84] Creating CNI manager for ""
	I1123 10:16:18.633768  356138 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:16:18.633790  356138 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:16:18.633824  356138 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-412306 NodeName:embed-certs-412306 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:16:18.633989  356138 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-412306"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
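The generated KubeletConfiguration above deliberately turns off disk-based eviction (0% eviction thresholds plus a 100% image GC high watermark), per its "disable disk resource management by default" comment. A minimal Go sketch, assuming gopkg.in/yaml.v3 is available and using struct and field names chosen only for this example, that parses those settings back out:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type kubeletCfg struct {
	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
	EvictionHard                map[string]string `yaml:"evictionHard"`
	FailSwapOn                  bool              `yaml:"failSwapOn"`
}

func main() {
	// Fragment copied from the generated config above.
	doc := `
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
`
	var cfg kubeletCfg
	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
		panic(err)
	}
	// 0% thresholds and a 100% image GC high watermark mean the kubelet never
	// evicts pods for disk pressure, matching the comment in the config.
	fmt.Printf("%+v\n", cfg)
}
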
	I1123 10:16:18.634064  356138 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:16:18.647059  356138 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:16:18.647172  356138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:16:18.658381  356138 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1123 10:16:18.675184  356138 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:16:18.696460  356138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1123 10:16:18.712392  356138 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:16:18.717832  356138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:16:18.731391  356138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:16:18.841960  356138 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:16:18.878215  356138 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306 for IP: 192.168.94.2
	I1123 10:16:18.878238  356138 certs.go:195] generating shared ca certs ...
	I1123 10:16:18.878258  356138 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:18.878425  356138 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 10:16:18.878475  356138 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 10:16:18.878488  356138 certs.go:257] generating profile certs ...
	I1123 10:16:18.878556  356138 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/client.key
	I1123 10:16:18.878580  356138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/client.crt with IP's: []
	I1123 10:16:19.147317  356138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/client.crt ...
	I1123 10:16:19.147348  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/client.crt: {Name:mkbf59c08f4785d244500114d39649c207c90bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:19.147525  356138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/client.key ...
	I1123 10:16:19.147545  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/client.key: {Name:mkb75245d2cacd41a4a207ee2cc5a25d4ea8629b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:19.147671  356138 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key.7dd66a37
	I1123 10:16:19.147694  356138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.crt.7dd66a37 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1123 10:16:19.174958  356138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.crt.7dd66a37 ...
	I1123 10:16:19.174991  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.crt.7dd66a37: {Name:mk680cab74fc85275258d54871c4d313a4cfa6da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:19.175171  356138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key.7dd66a37 ...
	I1123 10:16:19.175191  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key.7dd66a37: {Name:mk076b1fd9788864d5fa8bfdccf76cb7bad2f09d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:19.175299  356138 certs.go:382] copying /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.crt.7dd66a37 -> /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.crt
	I1123 10:16:19.175403  356138 certs.go:386] copying /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key.7dd66a37 -> /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key
	I1123 10:16:19.175476  356138 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.key
	I1123 10:16:19.175494  356138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.crt with IP's: []
	I1123 10:16:19.340924  356138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.crt ...
	I1123 10:16:19.340952  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.crt: {Name:mkd487bb2ca9fa1bc04caff7aa2bcbc384decd7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:19.341151  356138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.key ...
	I1123 10:16:19.341173  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.key: {Name:mk7c8f5756d2d24a341f272a1597aebf84673b6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:19.341385  356138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem (1338 bytes)
	W1123 10:16:19.341439  356138 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870_empty.pem, impossibly tiny 0 bytes
	I1123 10:16:19.341456  356138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 10:16:19.341495  356138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:16:19.341530  356138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:16:19.341573  356138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 10:16:19.341632  356138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:16:19.342348  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:16:19.363830  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:16:19.385303  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:16:19.406023  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 10:16:19.433442  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 10:16:19.463003  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 10:16:19.482783  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:16:19.500070  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:16:19.520265  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:16:19.541432  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem --> /usr/share/ca-certificates/67870.pem (1338 bytes)
	I1123 10:16:19.559861  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /usr/share/ca-certificates/678702.pem (1708 bytes)
	I1123 10:16:19.581528  356138 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:16:19.597355  356138 ssh_runner.go:195] Run: openssl version
	I1123 10:16:19.604898  356138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:16:19.614800  356138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:16:19.619006  356138 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:16:19.619057  356138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:16:19.654890  356138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:16:19.664327  356138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67870.pem && ln -fs /usr/share/ca-certificates/67870.pem /etc/ssl/certs/67870.pem"
	I1123 10:16:19.673063  356138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67870.pem
	I1123 10:16:19.676814  356138 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:28 /usr/share/ca-certificates/67870.pem
	I1123 10:16:19.676871  356138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67870.pem
	I1123 10:16:19.721797  356138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67870.pem /etc/ssl/certs/51391683.0"
	I1123 10:16:19.730991  356138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678702.pem && ln -fs /usr/share/ca-certificates/678702.pem /etc/ssl/certs/678702.pem"
	I1123 10:16:19.739616  356138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678702.pem
	I1123 10:16:19.743418  356138 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:28 /usr/share/ca-certificates/678702.pem
	I1123 10:16:19.743475  356138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678702.pem
	I1123 10:16:19.777638  356138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678702.pem /etc/ssl/certs/3ec20f2e.0"
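The openssl/ln sequence above installs each CA into the node's trust store under a hash-named symlink (b5213941.0 and 3ec20f2e.0 in this run). A rough Go sketch of the same pattern, shelling out to openssl exactly as the logged commands do; the paths are taken from this run and are not a general recipe:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the log runs: compute the OpenSSL subject hash of the CA.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	// The cert is then symlinked as /etc/ssl/certs/<hash>.0 so OpenSSL's
	// lookup-by-hash finds it (b5213941.0 for minikubeCA in the log above).
	fmt.Printf("sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}
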
	I1123 10:16:19.787103  356138 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:16:19.790766  356138 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
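The "doesn't exist, likely first start" message above is driven by a plain stat of the kubelet client certificate. An illustrative check of that kind (not the actual certs.go code):

package main

import (
	"errors"
	"fmt"
	"os"
)

func main() {
	// Path reported in the log above; its absence signals kubeadm has not run yet.
	const cert = "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	if _, err := os.Stat(cert); errors.Is(err, os.ErrNotExist) {
		fmt.Println("cert missing, likely first start: kubeadm init will generate it")
		return
	}
	fmt.Println("cert present: existing cluster state")
}
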
	I1123 10:16:19.790816  356138 kubeadm.go:401] StartCluster: {Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:16:19.790901  356138 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:16:19.790939  356138 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:16:19.819126  356138 cri.go:89] found id: ""
	I1123 10:16:19.819202  356138 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:16:19.827259  356138 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:16:19.835053  356138 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 10:16:19.835138  356138 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:16:19.842912  356138 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 10:16:19.842928  356138 kubeadm.go:158] found existing configuration files:
	
	I1123 10:16:19.842967  356138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 10:16:19.850209  356138 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 10:16:19.850251  356138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 10:16:19.857884  356138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 10:16:19.866646  356138 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 10:16:19.866697  356138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:16:19.874327  356138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 10:16:19.881762  356138 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 10:16:19.881807  356138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:16:19.889164  356138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 10:16:19.896714  356138 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 10:16:19.896758  356138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 10:16:19.904290  356138 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 10:16:19.943603  356138 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 10:16:19.943708  356138 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 10:16:19.965048  356138 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 10:16:19.965154  356138 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 10:16:19.965246  356138 kubeadm.go:319] OS: Linux
	I1123 10:16:19.965327  356138 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 10:16:19.965405  356138 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 10:16:19.965481  356138 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 10:16:19.965573  356138 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 10:16:19.965644  356138 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 10:16:19.965732  356138 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 10:16:19.965823  356138 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 10:16:19.965891  356138 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 10:16:20.026266  356138 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 10:16:20.026438  356138 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 10:16:20.026607  356138 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 10:16:20.033615  356138 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 10:16:19.189076  344952 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:16:19.601794  344952 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:16:20.183417  344952 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:16:20.185182  344952 kubeadm.go:319] 
	I1123 10:16:20.185298  344952 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:16:20.185319  344952 kubeadm.go:319] 
	I1123 10:16:20.185397  344952 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:16:20.185409  344952 kubeadm.go:319] 
	I1123 10:16:20.185430  344952 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:16:20.185517  344952 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:16:20.185598  344952 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:16:20.185607  344952 kubeadm.go:319] 
	I1123 10:16:20.185682  344952 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:16:20.185690  344952 kubeadm.go:319] 
	I1123 10:16:20.185750  344952 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:16:20.185764  344952 kubeadm.go:319] 
	I1123 10:16:20.185817  344952 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:16:20.185945  344952 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:16:20.186023  344952 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:16:20.186032  344952 kubeadm.go:319] 
	I1123 10:16:20.186178  344952 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:16:20.186301  344952 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:16:20.186313  344952 kubeadm.go:319] 
	I1123 10:16:20.186423  344952 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token mhptlw.q9ng0jhdmffx1zol \
	I1123 10:16:20.186578  344952 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 \
	I1123 10:16:20.186625  344952 kubeadm.go:319] 	--control-plane 
	I1123 10:16:20.186634  344952 kubeadm.go:319] 
	I1123 10:16:20.186761  344952 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:16:20.186780  344952 kubeadm.go:319] 
	I1123 10:16:20.186885  344952 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token mhptlw.q9ng0jhdmffx1zol \
	I1123 10:16:20.187030  344952 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 
	I1123 10:16:20.189698  344952 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 10:16:20.189890  344952 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
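The join commands printed above carry a --discovery-token-ca-cert-hash, which kubeadm derives as a SHA-256 over the CA certificate's Subject Public Key Info. A small Go sketch that recomputes it from ca.crt; the path is this cluster's and error handling is minimal:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash of the DER-encoded Subject Public Key Info, as kubeadm prints it.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("--discovery-token-ca-cert-hash sha256:%x\n", sum[:])
}
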
	I1123 10:16:20.189921  344952 cni.go:84] Creating CNI manager for ""
	I1123 10:16:20.189943  344952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:16:20.192370  344952 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1123 10:16:18.007511  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:20.508070  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	I1123 10:16:18.934624  344706 addons.go:530] duration metric: took 681.995047ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 10:16:19.151704  344706 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-990757" context rescaled to 1 replicas
	W1123 10:16:20.652483  344706 node_ready.go:57] node "old-k8s-version-990757" has "Ready":"False" status (will retry)
	W1123 10:16:23.151550  344706 node_ready.go:57] node "old-k8s-version-990757" has "Ready":"False" status (will retry)
	I1123 10:16:20.035950  356138 out.go:252]   - Generating certificates and keys ...
	I1123 10:16:20.036023  356138 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 10:16:20.036138  356138 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 10:16:20.199227  356138 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:16:20.296867  356138 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 10:16:20.649116  356138 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:16:20.853583  356138 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:16:21.223354  356138 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:16:21.223524  356138 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-412306 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1123 10:16:21.589454  356138 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:16:21.589601  356138 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-412306 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1123 10:16:21.712733  356138 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:16:22.231370  356138 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:16:22.493251  356138 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:16:22.493387  356138 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:16:22.795558  356138 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:16:22.972083  356138 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 10:16:23.034642  356138 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:16:23.345102  356138 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:16:23.769569  356138 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:16:23.770179  356138 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:16:23.773491  356138 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 10:16:20.193529  344952 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:16:20.198365  344952 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:16:20.198385  344952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:16:20.211881  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:16:20.437045  344952 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:16:20.437128  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:20.437165  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-541522 minikube.k8s.io/updated_at=2025_11_23T10_16_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=no-preload-541522 minikube.k8s.io/primary=true
	I1123 10:16:20.561626  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:20.561779  344952 ops.go:34] apiserver oom_adj: -16
	I1123 10:16:21.061993  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:21.561692  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:22.061999  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:22.561862  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:23.062326  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:23.561744  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:23.775519  356138 out.go:252]   - Booting up control plane ...
	I1123 10:16:23.775641  356138 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:16:23.775760  356138 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:16:23.775870  356138 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:16:23.790389  356138 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:16:23.790543  356138 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 10:16:23.797027  356138 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 10:16:23.797353  356138 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:16:23.797453  356138 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:16:23.917379  356138 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 10:16:23.917528  356138 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 10:16:24.062736  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:24.562369  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:24.632270  344952 kubeadm.go:1114] duration metric: took 4.195217058s to wait for elevateKubeSystemPrivileges
	I1123 10:16:24.632308  344952 kubeadm.go:403] duration metric: took 16.142295896s to StartCluster
	I1123 10:16:24.632326  344952 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:24.632400  344952 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:16:24.633884  344952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:24.634150  344952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:16:24.634179  344952 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:16:24.634251  344952 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:16:24.634355  344952 addons.go:70] Setting storage-provisioner=true in profile "no-preload-541522"
	I1123 10:16:24.634368  344952 addons.go:70] Setting default-storageclass=true in profile "no-preload-541522"
	I1123 10:16:24.634377  344952 addons.go:239] Setting addon storage-provisioner=true in "no-preload-541522"
	I1123 10:16:24.634388  344952 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-541522"
	I1123 10:16:24.634410  344952 host.go:66] Checking if "no-preload-541522" exists ...
	I1123 10:16:24.634455  344952 config.go:182] Loaded profile config "no-preload-541522": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:16:24.634764  344952 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:16:24.634912  344952 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:16:24.635539  344952 out.go:179] * Verifying Kubernetes components...
	I1123 10:16:24.636521  344952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:16:24.657418  344952 addons.go:239] Setting addon default-storageclass=true in "no-preload-541522"
	I1123 10:16:24.657470  344952 host.go:66] Checking if "no-preload-541522" exists ...
	I1123 10:16:24.657938  344952 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:16:24.658491  344952 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:16:24.659646  344952 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:24.659666  344952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:16:24.659724  344952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:16:24.685525  344952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:16:24.690195  344952 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:24.690219  344952 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:16:24.690298  344952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:16:24.724298  344952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:16:24.750701  344952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:16:24.796123  344952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:16:24.848328  344952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:24.848334  344952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:24.923983  344952 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 10:16:24.925356  344952 node_ready.go:35] waiting up to 6m0s for node "no-preload-541522" to be "Ready" ...
	I1123 10:16:25.228703  344952 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
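The "host record injected into CoreDNS's ConfigMap" line a few entries above corresponds to the sed-edited Corefile in the preceding Run command. A tiny illustrative Go helper (the function name is made up for this example) that renders the same hosts block:

package main

import "fmt"

// hostsBlock returns the Corefile fragment minikube splices in so that
// host.minikube.internal resolves to the host gateway IP.
func hostsBlock(hostIP string) string {
	return fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
}

func main() {
	// 192.168.85.1 is the host IP reported for this profile in the log above.
	fmt.Print(hostsBlock("192.168.85.1"))
}
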
	W1123 10:16:23.006965  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:25.008124  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:25.154186  344706 node_ready.go:57] node "old-k8s-version-990757" has "Ready":"False" status (will retry)
	W1123 10:16:27.651716  344706 node_ready.go:57] node "old-k8s-version-990757" has "Ready":"False" status (will retry)
	I1123 10:16:25.229824  344952 addons.go:530] duration metric: took 595.565525ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 10:16:25.428798  344952 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-541522" context rescaled to 1 replicas
	W1123 10:16:26.929589  344952 node_ready.go:57] node "no-preload-541522" has "Ready":"False" status (will retry)
	I1123 10:16:24.918996  356138 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001753375s
	I1123 10:16:24.925621  356138 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 10:16:24.925735  356138 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1123 10:16:24.925858  356138 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 10:16:24.925971  356138 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 10:16:26.512191  356138 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.587992193s
	I1123 10:16:27.081491  356138 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.157460492s
	I1123 10:16:28.925636  356138 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001590433s
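The control-plane-check lines above poll each component's health endpoint until it answers. A rough Go sketch of such a poll against the apiserver livez URL from this run; TLS verification is skipped purely for brevity, whereas kubeadm itself trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// Same 4m0s budget the kubeadm output above mentions.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.94.2:8443/livez")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("kube-apiserver is healthy")
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
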
	I1123 10:16:28.937425  356138 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:16:28.947025  356138 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:16:28.955505  356138 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:16:28.955787  356138 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-412306 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:16:28.963030  356138 kubeadm.go:319] [bootstrap-token] Using token: 2diej7.g3irisej2sfcnkox
	I1123 10:16:28.965317  356138 out.go:252]   - Configuring RBAC rules ...
	I1123 10:16:28.965442  356138 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:16:28.968022  356138 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:16:28.973224  356138 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:16:28.975951  356138 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:16:28.978262  356138 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:16:28.981645  356138 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:16:29.331628  356138 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:16:29.745711  356138 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:16:30.331119  356138 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:16:30.331918  356138 kubeadm.go:319] 
	I1123 10:16:30.332036  356138 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:16:30.332056  356138 kubeadm.go:319] 
	I1123 10:16:30.332201  356138 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:16:30.332221  356138 kubeadm.go:319] 
	I1123 10:16:30.332275  356138 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:16:30.332347  356138 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:16:30.332408  356138 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:16:30.332416  356138 kubeadm.go:319] 
	I1123 10:16:30.332478  356138 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:16:30.332486  356138 kubeadm.go:319] 
	I1123 10:16:30.332540  356138 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:16:30.332548  356138 kubeadm.go:319] 
	I1123 10:16:30.332612  356138 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:16:30.332708  356138 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:16:30.332818  356138 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:16:30.332837  356138 kubeadm.go:319] 
	I1123 10:16:30.332958  356138 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:16:30.333060  356138 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:16:30.333076  356138 kubeadm.go:319] 
	I1123 10:16:30.333211  356138 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2diej7.g3irisej2sfcnkox \
	I1123 10:16:30.333342  356138 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 \
	I1123 10:16:30.333366  356138 kubeadm.go:319] 	--control-plane 
	I1123 10:16:30.333375  356138 kubeadm.go:319] 
	I1123 10:16:30.333446  356138 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:16:30.333451  356138 kubeadm.go:319] 
	I1123 10:16:30.333535  356138 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2diej7.g3irisej2sfcnkox \
	I1123 10:16:30.333651  356138 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 
	I1123 10:16:30.336224  356138 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 10:16:30.336339  356138 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 10:16:30.336389  356138 cni.go:84] Creating CNI manager for ""
	I1123 10:16:30.336405  356138 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:16:30.401160  356138 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1123 10:16:27.506801  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:29.507199  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:29.651902  344706 node_ready.go:57] node "old-k8s-version-990757" has "Ready":"False" status (will retry)
	W1123 10:16:32.152208  344706 node_ready.go:57] node "old-k8s-version-990757" has "Ready":"False" status (will retry)
	I1123 10:16:32.651044  344706 node_ready.go:49] node "old-k8s-version-990757" is "Ready"
	I1123 10:16:32.651072  344706 node_ready.go:38] duration metric: took 14.002600443s for node "old-k8s-version-990757" to be "Ready" ...
	I1123 10:16:32.651103  344706 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:16:32.651154  344706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:16:32.664668  344706 api_server.go:72] duration metric: took 14.412040415s to wait for apiserver process to appear ...
	I1123 10:16:32.664699  344706 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:16:32.664734  344706 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:16:32.671045  344706 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:16:32.672175  344706 api_server.go:141] control plane version: v1.28.0
	I1123 10:16:32.672198  344706 api_server.go:131] duration metric: took 7.493612ms to wait for apiserver health ...
	I1123 10:16:32.672206  344706 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:16:32.675396  344706 system_pods.go:59] 8 kube-system pods found
	I1123 10:16:32.675423  344706 system_pods.go:61] "coredns-5dd5756b68-fsbfv" [d381637c-3686-4e19-95eb-489a0328363d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:32.675429  344706 system_pods.go:61] "etcd-old-k8s-version-990757" [9544c436-c89f-4d93-961e-c3d059a7e093] Running
	I1123 10:16:32.675438  344706 system_pods.go:61] "kindnet-nz2m9" [2de3e7ea-96dc-4120-8500-245759aaacda] Running
	I1123 10:16:32.675442  344706 system_pods.go:61] "kube-apiserver-old-k8s-version-990757" [ad563081-657a-4c35-8404-696aa7aa0e9c] Running
	I1123 10:16:32.675446  344706 system_pods.go:61] "kube-controller-manager-old-k8s-version-990757" [71f2226e-4030-45a3-a5dc-1f58332c62d8] Running
	I1123 10:16:32.675455  344706 system_pods.go:61] "kube-proxy-99g4b" [d727ffbe-b078-4abf-a715-fc9811920e00] Running
	I1123 10:16:32.675461  344706 system_pods.go:61] "kube-scheduler-old-k8s-version-990757" [6d10eeed-2aa8-44d8-9800-7b8a0992f902] Running
	I1123 10:16:32.675466  344706 system_pods.go:61] "storage-provisioner" [b9036b3a-e19e-439b-9584-93d805cb21ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:32.675474  344706 system_pods.go:74] duration metric: took 3.26216ms to wait for pod list to return data ...
	I1123 10:16:32.675483  344706 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:16:32.677500  344706 default_sa.go:45] found service account: "default"
	I1123 10:16:32.677517  344706 default_sa.go:55] duration metric: took 2.029784ms for default service account to be created ...
	I1123 10:16:32.677525  344706 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:16:32.680674  344706 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:32.680700  344706 system_pods.go:89] "coredns-5dd5756b68-fsbfv" [d381637c-3686-4e19-95eb-489a0328363d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:32.680707  344706 system_pods.go:89] "etcd-old-k8s-version-990757" [9544c436-c89f-4d93-961e-c3d059a7e093] Running
	I1123 10:16:32.680719  344706 system_pods.go:89] "kindnet-nz2m9" [2de3e7ea-96dc-4120-8500-245759aaacda] Running
	I1123 10:16:32.680730  344706 system_pods.go:89] "kube-apiserver-old-k8s-version-990757" [ad563081-657a-4c35-8404-696aa7aa0e9c] Running
	I1123 10:16:32.680736  344706 system_pods.go:89] "kube-controller-manager-old-k8s-version-990757" [71f2226e-4030-45a3-a5dc-1f58332c62d8] Running
	I1123 10:16:32.680745  344706 system_pods.go:89] "kube-proxy-99g4b" [d727ffbe-b078-4abf-a715-fc9811920e00] Running
	I1123 10:16:32.680751  344706 system_pods.go:89] "kube-scheduler-old-k8s-version-990757" [6d10eeed-2aa8-44d8-9800-7b8a0992f902] Running
	I1123 10:16:32.680760  344706 system_pods.go:89] "storage-provisioner" [b9036b3a-e19e-439b-9584-93d805cb21ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:32.680799  344706 retry.go:31] will retry after 291.35829ms: missing components: kube-dns
	I1123 10:16:32.977121  344706 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:32.977154  344706 system_pods.go:89] "coredns-5dd5756b68-fsbfv" [d381637c-3686-4e19-95eb-489a0328363d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:32.977161  344706 system_pods.go:89] "etcd-old-k8s-version-990757" [9544c436-c89f-4d93-961e-c3d059a7e093] Running
	I1123 10:16:32.977168  344706 system_pods.go:89] "kindnet-nz2m9" [2de3e7ea-96dc-4120-8500-245759aaacda] Running
	I1123 10:16:32.977172  344706 system_pods.go:89] "kube-apiserver-old-k8s-version-990757" [ad563081-657a-4c35-8404-696aa7aa0e9c] Running
	I1123 10:16:32.977176  344706 system_pods.go:89] "kube-controller-manager-old-k8s-version-990757" [71f2226e-4030-45a3-a5dc-1f58332c62d8] Running
	I1123 10:16:32.977188  344706 system_pods.go:89] "kube-proxy-99g4b" [d727ffbe-b078-4abf-a715-fc9811920e00] Running
	I1123 10:16:32.977195  344706 system_pods.go:89] "kube-scheduler-old-k8s-version-990757" [6d10eeed-2aa8-44d8-9800-7b8a0992f902] Running
	I1123 10:16:32.977199  344706 system_pods.go:89] "storage-provisioner" [b9036b3a-e19e-439b-9584-93d805cb21ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:32.977215  344706 retry.go:31] will retry after 325.371921ms: missing components: kube-dns
	I1123 10:16:33.307183  344706 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:33.307222  344706 system_pods.go:89] "coredns-5dd5756b68-fsbfv" [d381637c-3686-4e19-95eb-489a0328363d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:33.307228  344706 system_pods.go:89] "etcd-old-k8s-version-990757" [9544c436-c89f-4d93-961e-c3d059a7e093] Running
	I1123 10:16:33.307234  344706 system_pods.go:89] "kindnet-nz2m9" [2de3e7ea-96dc-4120-8500-245759aaacda] Running
	I1123 10:16:33.307237  344706 system_pods.go:89] "kube-apiserver-old-k8s-version-990757" [ad563081-657a-4c35-8404-696aa7aa0e9c] Running
	I1123 10:16:33.307241  344706 system_pods.go:89] "kube-controller-manager-old-k8s-version-990757" [71f2226e-4030-45a3-a5dc-1f58332c62d8] Running
	I1123 10:16:33.307244  344706 system_pods.go:89] "kube-proxy-99g4b" [d727ffbe-b078-4abf-a715-fc9811920e00] Running
	I1123 10:16:33.307253  344706 system_pods.go:89] "kube-scheduler-old-k8s-version-990757" [6d10eeed-2aa8-44d8-9800-7b8a0992f902] Running
	I1123 10:16:33.307257  344706 system_pods.go:89] "storage-provisioner" [b9036b3a-e19e-439b-9584-93d805cb21ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:33.307274  344706 retry.go:31] will retry after 477.295588ms: missing components: kube-dns
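The "will retry after ...: missing components: kube-dns" entries above are a wait-until-ready loop with growing delays. A generic Go sketch of that pattern; waitFor and the backoff values are illustrative, not minikube's retry implementation:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor keeps calling check with increasing delays until it succeeds or the
// timeout elapses, logging each retry like the entries above.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff *= 2
	}
}

func main() {
	attempts := 0
	err := waitFor(func() error {
		attempts++
		if attempts < 3 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	}, time.Minute)
	fmt.Println("done:", err)
}
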
	W1123 10:16:29.428459  344952 node_ready.go:57] node "no-preload-541522" has "Ready":"False" status (will retry)
	W1123 10:16:31.428879  344952 node_ready.go:57] node "no-preload-541522" has "Ready":"False" status (will retry)
	W1123 10:16:33.429049  344952 node_ready.go:57] node "no-preload-541522" has "Ready":"False" status (will retry)
	I1123 10:16:30.402276  356138 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:16:30.407016  356138 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:16:30.407034  356138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:16:30.424045  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:16:30.638241  356138 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:16:30.638352  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:30.638388  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-412306 minikube.k8s.io/updated_at=2025_11_23T10_16_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=embed-certs-412306 minikube.k8s.io/primary=true
	I1123 10:16:30.648402  356138 ops.go:34] apiserver oom_adj: -16
	I1123 10:16:30.709488  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:31.210134  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:31.710498  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:32.209893  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:32.709530  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:33.209575  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:33.709563  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:34.210241  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:34.709746  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:35.210264  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:35.283600  356138 kubeadm.go:1114] duration metric: took 4.64531381s to wait for elevateKubeSystemPrivileges
	I1123 10:16:35.283643  356138 kubeadm.go:403] duration metric: took 15.49282887s to StartCluster
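
The half-second `kubectl get sa default` loop above is the wait behind the "elevateKubeSystemPrivileges" duration metric: the command is simply re-run until it exits zero, i.e. until the ServiceAccount controller has created the `default` account. A sketch of that wait using os/exec; the timeout and the shortened binary name are illustrative.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // Same command as in the log, with the full binary path shortened.
            cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
                "get", "sa", "default")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account exists")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default service account")
    }
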
	I1123 10:16:35.283665  356138 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:35.283762  356138 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:16:35.285869  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:35.286180  356138 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:16:35.286331  356138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:16:35.286610  356138 config.go:182] Loaded profile config "embed-certs-412306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:16:35.286435  356138 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:16:35.286707  356138 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-412306"
	I1123 10:16:35.286812  356138 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-412306"
	I1123 10:16:35.286885  356138 host.go:66] Checking if "embed-certs-412306" exists ...
	I1123 10:16:35.286746  356138 addons.go:70] Setting default-storageclass=true in profile "embed-certs-412306"
	I1123 10:16:35.287011  356138 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-412306"
	I1123 10:16:35.287600  356138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:16:35.287780  356138 out.go:179] * Verifying Kubernetes components...
	I1123 10:16:35.288910  356138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:16:35.289524  356138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:16:35.314640  356138 addons.go:239] Setting addon default-storageclass=true in "embed-certs-412306"
	I1123 10:16:35.314789  356138 host.go:66] Checking if "embed-certs-412306" exists ...
	I1123 10:16:35.315364  356138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:16:35.316039  356138 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
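
The repeated `docker container inspect <profile> --format={{.State.Status}}` calls above are how the addon code reads the machine's state before connecting over SSH. A hedged sketch of the same check via os/exec, with error handling trimmed; the profile name is taken from the log.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerStatus mirrors the cli_runner call in the log:
    //   docker container inspect <name> --format={{.State.Status}}
    func containerStatus(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        status, err := containerStatus("embed-certs-412306")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("status:", status) // e.g. "running"
    }
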
	I1123 10:16:33.788957  344706 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:33.788988  344706 system_pods.go:89] "coredns-5dd5756b68-fsbfv" [d381637c-3686-4e19-95eb-489a0328363d] Running
	I1123 10:16:33.788994  344706 system_pods.go:89] "etcd-old-k8s-version-990757" [9544c436-c89f-4d93-961e-c3d059a7e093] Running
	I1123 10:16:33.788997  344706 system_pods.go:89] "kindnet-nz2m9" [2de3e7ea-96dc-4120-8500-245759aaacda] Running
	I1123 10:16:33.789001  344706 system_pods.go:89] "kube-apiserver-old-k8s-version-990757" [ad563081-657a-4c35-8404-696aa7aa0e9c] Running
	I1123 10:16:33.789006  344706 system_pods.go:89] "kube-controller-manager-old-k8s-version-990757" [71f2226e-4030-45a3-a5dc-1f58332c62d8] Running
	I1123 10:16:33.789009  344706 system_pods.go:89] "kube-proxy-99g4b" [d727ffbe-b078-4abf-a715-fc9811920e00] Running
	I1123 10:16:33.789013  344706 system_pods.go:89] "kube-scheduler-old-k8s-version-990757" [6d10eeed-2aa8-44d8-9800-7b8a0992f902] Running
	I1123 10:16:33.789017  344706 system_pods.go:89] "storage-provisioner" [b9036b3a-e19e-439b-9584-93d805cb21ea] Running
	I1123 10:16:33.789025  344706 system_pods.go:126] duration metric: took 1.111493702s to wait for k8s-apps to be running ...
	I1123 10:16:33.789036  344706 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:16:33.789083  344706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:16:33.801872  344706 system_svc.go:56] duration metric: took 12.824145ms WaitForService to wait for kubelet
	I1123 10:16:33.801901  344706 kubeadm.go:587] duration metric: took 15.549282124s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:16:33.801917  344706 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:16:33.804486  344706 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:16:33.804512  344706 node_conditions.go:123] node cpu capacity is 8
	I1123 10:16:33.804532  344706 node_conditions.go:105] duration metric: took 2.608231ms to run NodePressure ...
	I1123 10:16:33.804549  344706 start.go:242] waiting for startup goroutines ...
	I1123 10:16:33.804563  344706 start.go:247] waiting for cluster config update ...
	I1123 10:16:33.804579  344706 start.go:256] writing updated cluster config ...
	I1123 10:16:33.804859  344706 ssh_runner.go:195] Run: rm -f paused
	I1123 10:16:33.808438  344706 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:33.812221  344706 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-fsbfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:33.816745  344706 pod_ready.go:94] pod "coredns-5dd5756b68-fsbfv" is "Ready"
	I1123 10:16:33.816770  344706 pod_ready.go:86] duration metric: took 4.52627ms for pod "coredns-5dd5756b68-fsbfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:33.819363  344706 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:33.823014  344706 pod_ready.go:94] pod "etcd-old-k8s-version-990757" is "Ready"
	I1123 10:16:33.823034  344706 pod_ready.go:86] duration metric: took 3.64929ms for pod "etcd-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:33.825305  344706 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:33.830141  344706 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-990757" is "Ready"
	I1123 10:16:33.830162  344706 pod_ready.go:86] duration metric: took 4.841585ms for pod "kube-apiserver-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:33.832571  344706 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:34.213051  344706 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-990757" is "Ready"
	I1123 10:16:34.213110  344706 pod_ready.go:86] duration metric: took 380.4924ms for pod "kube-controller-manager-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:34.413069  344706 pod_ready.go:83] waiting for pod "kube-proxy-99g4b" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:34.813198  344706 pod_ready.go:94] pod "kube-proxy-99g4b" is "Ready"
	I1123 10:16:34.813228  344706 pod_ready.go:86] duration metric: took 400.102635ms for pod "kube-proxy-99g4b" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:35.012747  344706 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:35.412818  344706 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-990757" is "Ready"
	I1123 10:16:35.412845  344706 pod_ready.go:86] duration metric: took 400.068338ms for pod "kube-scheduler-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:35.412857  344706 pod_ready.go:40] duration metric: took 1.604388715s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:35.469188  344706 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1123 10:16:35.510336  344706 out.go:203] 
	W1123 10:16:35.512291  344706 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 10:16:35.513439  344706 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 10:16:35.514923  344706 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-990757" cluster and "default" namespace by default
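
The "minor skew: 6" figure a few lines above is just the difference between the client and server minor versions (34 − 28); kubectl is only supported within one minor version of the API server, hence the warning. A small sketch of that comparison; the minorOf helper is hypothetical.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorOf extracts the minor component from a "major.minor.patch" version string.
    func minorOf(v string) int {
        parts := strings.Split(v, ".")
        m, _ := strconv.Atoi(parts[1])
        return m
    }

    func main() {
        client, cluster := "1.34.2", "1.28.0"
        skew := minorOf(client) - minorOf(cluster)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
        if skew > 1 {
            fmt.Println("! kubectl may have incompatibilities with this cluster version")
        }
    }
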
	I1123 10:16:35.317954  356138 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:35.317987  356138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:16:35.318441  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:35.340962  356138 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:35.340989  356138 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:16:35.341107  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:35.347702  356138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:16:35.369097  356138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:16:35.375674  356138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:16:35.442865  356138 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:16:35.465653  356138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:35.487123  356138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:35.561205  356138 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1123 10:16:35.562463  356138 node_ready.go:35] waiting up to 6m0s for node "embed-certs-412306" to be "Ready" ...
	I1123 10:16:35.788632  356138 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
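
The long sed pipeline a few lines above edits the coredns ConfigMap in place: it splices a `hosts` block ahead of the `forward . /etc/resolv.conf` directive so that `host.minikube.internal` resolves to the gateway IP, adds `log` before `errors`, and pipes the result back through `kubectl replace`. The injected Corefile fragment, reproduced here as a Go string purely for reference (192.168.94.1 is that cluster's gateway, from the log):

    package main

    import "fmt"

    // hostsBlock is the fragment the sed pipeline inserts into the Corefile.
    const hostsBlock = `        hosts {
               192.168.94.1 host.minikube.internal
               fallthrough
            }`

    func main() { fmt.Println(hostsBlock) }
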
	W1123 10:16:32.005830  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:34.006310  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:36.007382  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:35.430057  344952 node_ready.go:57] node "no-preload-541522" has "Ready":"False" status (will retry)
	W1123 10:16:37.929223  344952 node_ready.go:57] node "no-preload-541522" has "Ready":"False" status (will retry)
	I1123 10:16:35.789494  356138 addons.go:530] duration metric: took 503.064926ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 10:16:36.066022  356138 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-412306" context rescaled to 1 replicas
	W1123 10:16:37.565650  356138 node_ready.go:57] node "embed-certs-412306" has "Ready":"False" status (will retry)
	W1123 10:16:38.507551  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:41.006771  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	I1123 10:16:38.928775  344952 node_ready.go:49] node "no-preload-541522" is "Ready"
	I1123 10:16:38.928809  344952 node_ready.go:38] duration metric: took 14.003414343s for node "no-preload-541522" to be "Ready" ...
	I1123 10:16:38.928827  344952 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:16:38.928893  344952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:16:38.941967  344952 api_server.go:72] duration metric: took 14.30774812s to wait for apiserver process to appear ...
	I1123 10:16:38.941992  344952 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:16:38.942007  344952 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:16:38.946871  344952 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 10:16:38.947779  344952 api_server.go:141] control plane version: v1.34.1
	I1123 10:16:38.947803  344952 api_server.go:131] duration metric: took 5.806056ms to wait for apiserver health ...
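
The healthz wait above issues a plain HTTPS GET against the API server and treats a 200 response with body `ok` as healthy. A minimal Go sketch of that probe; certificate verification is skipped here for brevity, whereas the real code trusts the cluster CA.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch only: skip TLS verification instead of loading the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.85.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }
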
	I1123 10:16:38.947811  344952 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:16:38.951278  344952 system_pods.go:59] 8 kube-system pods found
	I1123 10:16:38.951306  344952 system_pods.go:61] "coredns-66bc5c9577-krmwt" [39101b53-5254-41f3-bac9-c711e67dc551] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:38.951313  344952 system_pods.go:61] "etcd-no-preload-541522" [80258726-c8e2-4b27-962c-ee45e6948d2c] Running
	I1123 10:16:38.951318  344952 system_pods.go:61] "kindnet-9vppw" [3b98e7a4-34e9-46af-97a1-764b6ed92ec6] Running
	I1123 10:16:38.951322  344952 system_pods.go:61] "kube-apiserver-no-preload-541522" [54bb8554-b2d7-4fc2-9d26-507e36b6d56f] Running
	I1123 10:16:38.951328  344952 system_pods.go:61] "kube-controller-manager-no-preload-541522" [b6d91917-0381-4558-9f2a-769f81cf9d86] Running
	I1123 10:16:38.951333  344952 system_pods.go:61] "kube-proxy-sllct" [c5b13417-4bca-4ec1-8e60-cf5016aa28ca] Running
	I1123 10:16:38.951337  344952 system_pods.go:61] "kube-scheduler-no-preload-541522" [31a3c55f-ac27-4800-af06-822af5bc6836] Running
	I1123 10:16:38.951341  344952 system_pods.go:61] "storage-provisioner" [40eb99ea-9515-431c-888b-81826014f8a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:38.951347  344952 system_pods.go:74] duration metric: took 3.530661ms to wait for pod list to return data ...
	I1123 10:16:38.951356  344952 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:16:38.953395  344952 default_sa.go:45] found service account: "default"
	I1123 10:16:38.953416  344952 default_sa.go:55] duration metric: took 2.05549ms for default service account to be created ...
	I1123 10:16:38.953424  344952 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:16:38.955705  344952 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:38.955729  344952 system_pods.go:89] "coredns-66bc5c9577-krmwt" [39101b53-5254-41f3-bac9-c711e67dc551] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:38.955735  344952 system_pods.go:89] "etcd-no-preload-541522" [80258726-c8e2-4b27-962c-ee45e6948d2c] Running
	I1123 10:16:38.955743  344952 system_pods.go:89] "kindnet-9vppw" [3b98e7a4-34e9-46af-97a1-764b6ed92ec6] Running
	I1123 10:16:38.955749  344952 system_pods.go:89] "kube-apiserver-no-preload-541522" [54bb8554-b2d7-4fc2-9d26-507e36b6d56f] Running
	I1123 10:16:38.955755  344952 system_pods.go:89] "kube-controller-manager-no-preload-541522" [b6d91917-0381-4558-9f2a-769f81cf9d86] Running
	I1123 10:16:38.955766  344952 system_pods.go:89] "kube-proxy-sllct" [c5b13417-4bca-4ec1-8e60-cf5016aa28ca] Running
	I1123 10:16:38.955774  344952 system_pods.go:89] "kube-scheduler-no-preload-541522" [31a3c55f-ac27-4800-af06-822af5bc6836] Running
	I1123 10:16:38.955785  344952 system_pods.go:89] "storage-provisioner" [40eb99ea-9515-431c-888b-81826014f8a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:38.955807  344952 retry.go:31] will retry after 286.541435ms: missing components: kube-dns
	I1123 10:16:39.246793  344952 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:39.246834  344952 system_pods.go:89] "coredns-66bc5c9577-krmwt" [39101b53-5254-41f3-bac9-c711e67dc551] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:39.246842  344952 system_pods.go:89] "etcd-no-preload-541522" [80258726-c8e2-4b27-962c-ee45e6948d2c] Running
	I1123 10:16:39.246850  344952 system_pods.go:89] "kindnet-9vppw" [3b98e7a4-34e9-46af-97a1-764b6ed92ec6] Running
	I1123 10:16:39.246855  344952 system_pods.go:89] "kube-apiserver-no-preload-541522" [54bb8554-b2d7-4fc2-9d26-507e36b6d56f] Running
	I1123 10:16:39.246861  344952 system_pods.go:89] "kube-controller-manager-no-preload-541522" [b6d91917-0381-4558-9f2a-769f81cf9d86] Running
	I1123 10:16:39.246866  344952 system_pods.go:89] "kube-proxy-sllct" [c5b13417-4bca-4ec1-8e60-cf5016aa28ca] Running
	I1123 10:16:39.246876  344952 system_pods.go:89] "kube-scheduler-no-preload-541522" [31a3c55f-ac27-4800-af06-822af5bc6836] Running
	I1123 10:16:39.246889  344952 system_pods.go:89] "storage-provisioner" [40eb99ea-9515-431c-888b-81826014f8a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:39.246907  344952 retry.go:31] will retry after 342.610222ms: missing components: kube-dns
	I1123 10:16:39.594146  344952 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:39.594183  344952 system_pods.go:89] "coredns-66bc5c9577-krmwt" [39101b53-5254-41f3-bac9-c711e67dc551] Running
	I1123 10:16:39.594196  344952 system_pods.go:89] "etcd-no-preload-541522" [80258726-c8e2-4b27-962c-ee45e6948d2c] Running
	I1123 10:16:39.594200  344952 system_pods.go:89] "kindnet-9vppw" [3b98e7a4-34e9-46af-97a1-764b6ed92ec6] Running
	I1123 10:16:39.594204  344952 system_pods.go:89] "kube-apiserver-no-preload-541522" [54bb8554-b2d7-4fc2-9d26-507e36b6d56f] Running
	I1123 10:16:39.594210  344952 system_pods.go:89] "kube-controller-manager-no-preload-541522" [b6d91917-0381-4558-9f2a-769f81cf9d86] Running
	I1123 10:16:39.594215  344952 system_pods.go:89] "kube-proxy-sllct" [c5b13417-4bca-4ec1-8e60-cf5016aa28ca] Running
	I1123 10:16:39.594220  344952 system_pods.go:89] "kube-scheduler-no-preload-541522" [31a3c55f-ac27-4800-af06-822af5bc6836] Running
	I1123 10:16:39.594226  344952 system_pods.go:89] "storage-provisioner" [40eb99ea-9515-431c-888b-81826014f8a6] Running
	I1123 10:16:39.594236  344952 system_pods.go:126] duration metric: took 640.805319ms to wait for k8s-apps to be running ...
	I1123 10:16:39.594250  344952 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:16:39.594310  344952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:16:39.608983  344952 system_svc.go:56] duration metric: took 14.722696ms WaitForService to wait for kubelet
	I1123 10:16:39.609015  344952 kubeadm.go:587] duration metric: took 14.97480089s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:16:39.609037  344952 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:16:39.611842  344952 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:16:39.611865  344952 node_conditions.go:123] node cpu capacity is 8
	I1123 10:16:39.611882  344952 node_conditions.go:105] duration metric: took 2.839945ms to run NodePressure ...
	I1123 10:16:39.611895  344952 start.go:242] waiting for startup goroutines ...
	I1123 10:16:39.611908  344952 start.go:247] waiting for cluster config update ...
	I1123 10:16:39.611919  344952 start.go:256] writing updated cluster config ...
	I1123 10:16:39.612185  344952 ssh_runner.go:195] Run: rm -f paused
	I1123 10:16:39.616031  344952 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:39.619510  344952 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-krmwt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:39.623392  344952 pod_ready.go:94] pod "coredns-66bc5c9577-krmwt" is "Ready"
	I1123 10:16:39.623415  344952 pod_ready.go:86] duration metric: took 3.869312ms for pod "coredns-66bc5c9577-krmwt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:39.625265  344952 pod_ready.go:83] waiting for pod "etcd-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:39.628641  344952 pod_ready.go:94] pod "etcd-no-preload-541522" is "Ready"
	I1123 10:16:39.628659  344952 pod_ready.go:86] duration metric: took 3.374871ms for pod "etcd-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:39.630356  344952 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:39.633564  344952 pod_ready.go:94] pod "kube-apiserver-no-preload-541522" is "Ready"
	I1123 10:16:39.633587  344952 pod_ready.go:86] duration metric: took 3.21019ms for pod "kube-apiserver-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:39.635340  344952 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:40.020259  344952 pod_ready.go:94] pod "kube-controller-manager-no-preload-541522" is "Ready"
	I1123 10:16:40.020290  344952 pod_ready.go:86] duration metric: took 384.929039ms for pod "kube-controller-manager-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:40.220795  344952 pod_ready.go:83] waiting for pod "kube-proxy-sllct" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:40.620970  344952 pod_ready.go:94] pod "kube-proxy-sllct" is "Ready"
	I1123 10:16:40.621002  344952 pod_ready.go:86] duration metric: took 400.183007ms for pod "kube-proxy-sllct" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:40.819960  344952 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:41.219866  344952 pod_ready.go:94] pod "kube-scheduler-no-preload-541522" is "Ready"
	I1123 10:16:41.219893  344952 pod_ready.go:86] duration metric: took 399.908601ms for pod "kube-scheduler-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:41.219905  344952 pod_ready.go:40] duration metric: took 1.603850974s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:41.264158  344952 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:16:41.265945  344952 out.go:179] * Done! kubectl is now configured to use "no-preload-541522" cluster and "default" namespace by default
	I1123 10:16:42.506018  341630 pod_ready.go:94] pod "coredns-66bc5c9577-p6sw2" is "Ready"
	I1123 10:16:42.506054  341630 pod_ready.go:86] duration metric: took 31.004987147s for pod "coredns-66bc5c9577-p6sw2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:42.508459  341630 pod_ready.go:83] waiting for pod "etcd-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:42.514192  341630 pod_ready.go:94] pod "etcd-bridge-791161" is "Ready"
	I1123 10:16:42.514218  341630 pod_ready.go:86] duration metric: took 5.738216ms for pod "etcd-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:42.516115  341630 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:42.519705  341630 pod_ready.go:94] pod "kube-apiserver-bridge-791161" is "Ready"
	I1123 10:16:42.519724  341630 pod_ready.go:86] duration metric: took 3.591711ms for pod "kube-apiserver-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:42.521450  341630 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:42.704830  341630 pod_ready.go:94] pod "kube-controller-manager-bridge-791161" is "Ready"
	I1123 10:16:42.704859  341630 pod_ready.go:86] duration metric: took 183.390224ms for pod "kube-controller-manager-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:42.905328  341630 pod_ready.go:83] waiting for pod "kube-proxy-sn6s2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:43.304355  341630 pod_ready.go:94] pod "kube-proxy-sn6s2" is "Ready"
	I1123 10:16:43.304382  341630 pod_ready.go:86] duration metric: took 399.024239ms for pod "kube-proxy-sn6s2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:43.504607  341630 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:43.905001  341630 pod_ready.go:94] pod "kube-scheduler-bridge-791161" is "Ready"
	I1123 10:16:43.905030  341630 pod_ready.go:86] duration metric: took 400.39674ms for pod "kube-scheduler-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:43.905043  341630 pod_ready.go:40] duration metric: took 32.407876329s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:43.960235  341630 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:16:43.961459  341630 out.go:179] * Done! kubectl is now configured to use "bridge-791161" cluster and "default" namespace by default
	W1123 10:16:40.065837  356138 node_ready.go:57] node "embed-certs-412306" has "Ready":"False" status (will retry)
	W1123 10:16:42.565358  356138 node_ready.go:57] node "embed-certs-412306" has "Ready":"False" status (will retry)
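
The node_ready lines that alternate through this section poll the node object and look at its Ready condition until it flips to True. A hedged sketch of that check with client-go; the kubeconfig path and node name are illustrative, not the test harness's actual values.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        // Illustrative kubeconfig path; the tests use the profile's own kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ready, err := nodeReady(cs, "embed-certs-412306")
        fmt.Println("Ready:", ready, "err:", err)
    }
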
	
	
	==> CRI-O <==
	Nov 23 10:16:32 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:32.613323718Z" level=info msg="Starting container: d2204092c1cde3040a8d19416f49ade37c8a74a1ee107be12e46254d0fe079a4" id=7a380de0-ca1a-4ca2-b789-ff3c81306fad name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:16:32 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:32.615584656Z" level=info msg="Started container" PID=2134 containerID=d2204092c1cde3040a8d19416f49ade37c8a74a1ee107be12e46254d0fe079a4 description=kube-system/coredns-5dd5756b68-fsbfv/coredns id=7a380de0-ca1a-4ca2-b789-ff3c81306fad name=/runtime.v1.RuntimeService/StartContainer sandboxID=8049905e4832022faf3b7a34c9d1a9cc17189db0e4f4ab62d4a8fe4daa443e41
	Nov 23 10:16:36 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:36.000032099Z" level=info msg="Running pod sandbox: default/busybox/POD" id=f9d0df85-9287-4074-b47b-e52b28d855e1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:16:36 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:36.000136713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:16:36 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:36.005321741Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cc48f79517c27c3e5a276c8500dabf774cd93056f724c7e937779e055d37f3d1 UID:f5410b61-89c3-4f61-ae72-922d00c885eb NetNS:/var/run/netns/c3db045a-0e13-42f4-bf4b-dc60ac47e7a9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000316d10}] Aliases:map[]}"
	Nov 23 10:16:36 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:36.005348243Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 10:16:36 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:36.01490008Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cc48f79517c27c3e5a276c8500dabf774cd93056f724c7e937779e055d37f3d1 UID:f5410b61-89c3-4f61-ae72-922d00c885eb NetNS:/var/run/netns/c3db045a-0e13-42f4-bf4b-dc60ac47e7a9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000316d10}] Aliases:map[]}"
	Nov 23 10:16:36 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:36.015056514Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 10:16:36 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:36.015881804Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 10:16:36 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:36.017022693Z" level=info msg="Ran pod sandbox cc48f79517c27c3e5a276c8500dabf774cd93056f724c7e937779e055d37f3d1 with infra container: default/busybox/POD" id=f9d0df85-9287-4074-b47b-e52b28d855e1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:16:36 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:36.018158814Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c1719570-982a-4257-b20b-12c539244176 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:16:36 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:36.018276771Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c1719570-982a-4257-b20b-12c539244176 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:16:36 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:36.018310526Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c1719570-982a-4257-b20b-12c539244176 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:16:36 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:36.018882883Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d0986b64-e9ee-4f30-90a6-d146b9a29de6 name=/runtime.v1.ImageService/PullImage
	Nov 23 10:16:36 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:36.020160264Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 10:16:37 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:37.978175904Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=d0986b64-e9ee-4f30-90a6-d146b9a29de6 name=/runtime.v1.ImageService/PullImage
	Nov 23 10:16:37 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:37.979186154Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d3106483-49ea-41ac-a0cc-52f4a720375e name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:16:37 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:37.980709303Z" level=info msg="Creating container: default/busybox/busybox" id=dd27491e-79f0-4916-9cb7-985bf6391255 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:16:37 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:37.980834525Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:16:37 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:37.985764357Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:16:37 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:37.986222518Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:16:38 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:38.013295697Z" level=info msg="Created container 7bcecca9e302fe926e5a9686965ccf6e4577d5487e61f174d3339a5ce5217b10: default/busybox/busybox" id=dd27491e-79f0-4916-9cb7-985bf6391255 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:16:38 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:38.013898241Z" level=info msg="Starting container: 7bcecca9e302fe926e5a9686965ccf6e4577d5487e61f174d3339a5ce5217b10" id=6c2a4aa1-f600-41bb-961b-b9373d950a9f name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:16:38 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:38.015634278Z" level=info msg="Started container" PID=2216 containerID=7bcecca9e302fe926e5a9686965ccf6e4577d5487e61f174d3339a5ce5217b10 description=default/busybox/busybox id=6c2a4aa1-f600-41bb-961b-b9373d950a9f name=/runtime.v1.RuntimeService/StartContainer sandboxID=cc48f79517c27c3e5a276c8500dabf774cd93056f724c7e937779e055d37f3d1
	Nov 23 10:16:44 old-k8s-version-990757 crio[767]: time="2025-11-23T10:16:44.789080741Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	7bcecca9e302f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   cc48f79517c27       busybox                                          default
	d2204092c1cde       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 seconds ago      Running             coredns                   0                   8049905e48320       coredns-5dd5756b68-fsbfv                         kube-system
	cc0b716213770       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   6842f8d59e973       storage-provisioner                              kube-system
	3ad5374565a2d       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   43bb8228b95ee       kindnet-nz2m9                                    kube-system
	a4c19226e95ce       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      27 seconds ago      Running             kube-proxy                0                   bc087ee763b67       kube-proxy-99g4b                                 kube-system
	634d1b72de06d       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      46 seconds ago      Running             kube-apiserver            0                   b0690d1422c9a       kube-apiserver-old-k8s-version-990757            kube-system
	50f8093f9d54b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      46 seconds ago      Running             etcd                      0                   4d8a989fdb088       etcd-old-k8s-version-990757                      kube-system
	37f1ed6d9bd4c       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      46 seconds ago      Running             kube-controller-manager   0                   266b2e4bd9f27       kube-controller-manager-old-k8s-version-990757   kube-system
	ef9fd74daf09a       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      46 seconds ago      Running             kube-scheduler            0                   0cc0883a7f8f4       kube-scheduler-old-k8s-version-990757            kube-system
	
	
	==> coredns [d2204092c1cde3040a8d19416f49ade37c8a74a1ee107be12e46254d0fe079a4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49896 - 43174 "HINFO IN 8688597057188700433.4522270089608222483. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.133053418s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-990757
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-990757
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=old-k8s-version-990757
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_16_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:16:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-990757
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:16:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:16:36 +0000   Sun, 23 Nov 2025 10:16:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:16:36 +0000   Sun, 23 Nov 2025 10:16:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:16:36 +0000   Sun, 23 Nov 2025 10:16:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:16:36 +0000   Sun, 23 Nov 2025 10:16:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-990757
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                63027792-4520-472e-b216-dd92789c4530
	  Boot ID:                    37682299-5e60-467e-85b2-43c912a4056e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-fsbfv                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-old-k8s-version-990757                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         41s
	  kube-system                 kindnet-nz2m9                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-990757             250m (3%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-990757    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-99g4b                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-990757             100m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 41s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s   kubelet          Node old-k8s-version-990757 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s   kubelet          Node old-k8s-version-990757 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s   kubelet          Node old-k8s-version-990757 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node old-k8s-version-990757 event: Registered Node old-k8s-version-990757 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-990757 status is now: NodeReady
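
The "Allocated resources" table above is just the column sums of the pod table: CPU requests are 100m + 100m + 100m + 250m + 200m + 100m = 850m, which against the node's 8 CPUs (8000m) is about 10%, matching the "850m (10%)" line. A small sketch reproducing that sum:

    package main

    import "fmt"

    func main() {
        // Per-pod CPU requests from the "Non-terminated Pods" table, in millicores.
        requests := map[string]int{
            "coredns":                 100,
            "etcd":                    100,
            "kindnet":                 100,
            "kube-apiserver":          250,
            "kube-controller-manager": 200,
            "kube-scheduler":          100,
        }
        total := 0
        for _, m := range requests {
            total += m
        }
        capacity := 8 * 1000 // 8 CPUs in millicores
        fmt.Printf("cpu requests: %dm (%d%%)\n", total, total*100/capacity)
        // Output: cpu requests: 850m (10%)
    }
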
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +4.031511] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[  +8.255356] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[ +16.383752] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 09:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 10:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[Nov23 10:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 16 63 a6 3b 7c 08 06
	[  +0.000421] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e f8 56 88 48 d7 08 06
	[  +0.082350] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff be 6d 17 98 af e9 08 06
	[  +0.000334] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[ +24.687881] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 3c b3 56 e6 32 08 06
	[  +0.000364] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da b2 25 9e f0 5d 08 06
	[Nov23 10:16] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	
	
	==> etcd [50f8093f9d54b3777c496da60a9bd7fd23114d41c550d1a13f2ca86fda8e7de4] <==
	{"level":"info","ts":"2025-11-23T10:15:59.552346Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T10:16:00.527497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-23T10:16:00.527549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-23T10:16:00.52757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-23T10:16:00.527586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-23T10:16:00.527594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-23T10:16:00.527607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-23T10:16:00.527618Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-23T10:16:00.528606Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-990757 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T10:16:00.528653Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T10:16:00.529673Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T10:16:00.529955Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T10:16:00.530032Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T10:16:00.531268Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-23T10:16:00.53175Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T10:16:00.531878Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T10:16:00.53685Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T10:16:00.53449Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T10:16:00.536982Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T10:16:12.838167Z","caller":"traceutil/trace.go:171","msg":"trace[1928198313] transaction","detail":"{read_only:false; response_revision:254; number_of_response:1; }","duration":"136.373648ms","start":"2025-11-23T10:16:12.701743Z","end":"2025-11-23T10:16:12.838117Z","steps":["trace[1928198313] 'process raft request'  (duration: 136.205439ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:16:13.47366Z","caller":"traceutil/trace.go:171","msg":"trace[238851492] transaction","detail":"{read_only:false; response_revision:255; number_of_response:1; }","duration":"215.150829ms","start":"2025-11-23T10:16:13.258485Z","end":"2025-11-23T10:16:13.473636Z","steps":["trace[238851492] 'process raft request'  (duration: 215.009437ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:16:14.293177Z","caller":"traceutil/trace.go:171","msg":"trace[593504410] transaction","detail":"{read_only:false; response_revision:260; number_of_response:1; }","duration":"169.77091ms","start":"2025-11-23T10:16:14.123384Z","end":"2025-11-23T10:16:14.293155Z","steps":["trace[593504410] 'process raft request'  (duration: 169.508596ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:16:14.458788Z","caller":"traceutil/trace.go:171","msg":"trace[1520582989] transaction","detail":"{read_only:false; response_revision:261; number_of_response:1; }","duration":"151.184727ms","start":"2025-11-23T10:16:14.307577Z","end":"2025-11-23T10:16:14.458761Z","steps":["trace[1520582989] 'process raft request'  (duration: 99.335276ms)","trace[1520582989] 'compare'  (duration: 51.712562ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T10:16:14.73233Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.645563ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356837711554685 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/pvc-protection-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/pvc-protection-controller\" value_size:130 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T10:16:14.732441Z","caller":"traceutil/trace.go:171","msg":"trace[1607350862] transaction","detail":"{read_only:false; response_revision:263; number_of_response:1; }","duration":"257.486804ms","start":"2025-11-23T10:16:14.474925Z","end":"2025-11-23T10:16:14.732412Z","steps":["trace[1607350862] 'process raft request'  (duration: 141.298241ms)","trace[1607350862] 'compare'  (duration: 115.560486ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:16:46 up  2:59,  0 user,  load average: 8.18, 5.43, 2.91
	Linux old-k8s-version-990757 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3ad5374565a2daeb95c8212513a62c5a68ae8c8d3fba4641af39776107c96d0d] <==
	I1123 10:16:21.746700       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:16:21.746948       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 10:16:21.747082       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:16:21.747111       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:16:21.747131       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:16:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:16:22.043708       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:16:22.043783       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:16:22.043795       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:16:22.044329       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:16:22.344761       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:16:22.344782       1 metrics.go:72] Registering metrics
	I1123 10:16:22.344841       1 controller.go:711] "Syncing nftables rules"
	I1123 10:16:32.052519       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:16:32.052580       1 main.go:301] handling current node
	I1123 10:16:42.045561       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:16:42.045600       1 main.go:301] handling current node
	
	
	==> kube-apiserver [634d1b72de06d4e0158ad80ff013678cf355d5912884136c88815c24d27133d3] <==
	I1123 10:16:01.977382       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1123 10:16:01.977405       1 aggregator.go:166] initial CRD sync complete...
	I1123 10:16:01.977414       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 10:16:01.977421       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:16:01.977428       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:16:01.977573       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 10:16:01.979149       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 10:16:01.980389       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1123 10:16:01.992117       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:16:02.020330       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1123 10:16:02.882389       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 10:16:02.886610       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 10:16:02.886633       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:16:03.496894       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:16:03.542127       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:16:03.696260       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 10:16:03.705357       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 10:16:03.706673       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 10:16:03.711403       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:16:03.946232       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 10:16:05.129247       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 10:16:05.145226       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 10:16:05.158104       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1123 10:16:18.688067       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1123 10:16:18.736307       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [37f1ed6d9bd4cabadcfc5dcc67f8f2ab3fd6050a1fa529dc77cda0a303358952] <==
	I1123 10:16:18.035621       1 shared_informer.go:318] Caches are synced for deployment
	I1123 10:16:18.083377       1 shared_informer.go:318] Caches are synced for crt configmap
	I1123 10:16:18.089844       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 10:16:18.093618       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 10:16:18.412635       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 10:16:18.481836       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 10:16:18.481868       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 10:16:18.694892       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1123 10:16:18.718262       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1123 10:16:18.743767       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-99g4b"
	I1123 10:16:18.746310       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-nz2m9"
	I1123 10:16:18.889620       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-bqrxg"
	I1123 10:16:18.898355       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-fsbfv"
	I1123 10:16:18.922483       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="227.764007ms"
	I1123 10:16:18.940058       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-bqrxg"
	I1123 10:16:18.964977       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.314368ms"
	I1123 10:16:18.982387       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.242163ms"
	I1123 10:16:18.982491       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.9µs"
	I1123 10:16:18.982537       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="31.658µs"
	I1123 10:16:32.267481       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="463.008µs"
	I1123 10:16:32.278061       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.07µs"
	I1123 10:16:32.965388       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1123 10:16:33.312741       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="109.566µs"
	I1123 10:16:33.329815       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.95916ms"
	I1123 10:16:33.329940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.901µs"
	
	
	==> kube-proxy [a4c19226e95cea3bacd4ee18b01836b4e3ec572a694b5caa4cf6563c02362357] <==
	I1123 10:16:19.136289       1 server_others.go:69] "Using iptables proxy"
	I1123 10:16:19.145708       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1123 10:16:19.165915       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:16:19.168458       1 server_others.go:152] "Using iptables Proxier"
	I1123 10:16:19.168500       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 10:16:19.168510       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 10:16:19.168552       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 10:16:19.168805       1 server.go:846] "Version info" version="v1.28.0"
	I1123 10:16:19.168820       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:16:19.169473       1 config.go:315] "Starting node config controller"
	I1123 10:16:19.169534       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 10:16:19.169703       1 config.go:188] "Starting service config controller"
	I1123 10:16:19.169741       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 10:16:19.169763       1 config.go:97] "Starting endpoint slice config controller"
	I1123 10:16:19.169793       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 10:16:19.270194       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1123 10:16:19.270389       1 shared_informer.go:318] Caches are synced for node config
	I1123 10:16:19.270418       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [ef9fd74daf09a9bb0053a05cfa26433f1963cd98cda4731375a53a82421439cf] <==
	W1123 10:16:02.752580       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1123 10:16:02.752622       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 10:16:02.774932       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1123 10:16:02.774981       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1123 10:16:02.828158       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1123 10:16:02.828207       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1123 10:16:02.867835       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1123 10:16:02.867886       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1123 10:16:02.909762       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1123 10:16:02.909805       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1123 10:16:03.056899       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1123 10:16:03.056939       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1123 10:16:03.076800       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1123 10:16:03.076845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1123 10:16:03.088585       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1123 10:16:03.088640       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1123 10:16:03.113003       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1123 10:16:03.113674       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1123 10:16:03.186220       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1123 10:16:03.186268       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1123 10:16:03.201074       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1123 10:16:03.201217       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1123 10:16:03.219018       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1123 10:16:03.219058       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1123 10:16:04.439865       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 10:16:18 old-k8s-version-990757 kubelet[1382]: I1123 10:16:18.001941    1382 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 10:16:18 old-k8s-version-990757 kubelet[1382]: I1123 10:16:18.750533    1382 topology_manager.go:215] "Topology Admit Handler" podUID="d727ffbe-b078-4abf-a715-fc9811920e00" podNamespace="kube-system" podName="kube-proxy-99g4b"
	Nov 23 10:16:18 old-k8s-version-990757 kubelet[1382]: I1123 10:16:18.753889    1382 topology_manager.go:215] "Topology Admit Handler" podUID="2de3e7ea-96dc-4120-8500-245759aaacda" podNamespace="kube-system" podName="kindnet-nz2m9"
	Nov 23 10:16:18 old-k8s-version-990757 kubelet[1382]: I1123 10:16:18.789315    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2de3e7ea-96dc-4120-8500-245759aaacda-xtables-lock\") pod \"kindnet-nz2m9\" (UID: \"2de3e7ea-96dc-4120-8500-245759aaacda\") " pod="kube-system/kindnet-nz2m9"
	Nov 23 10:16:18 old-k8s-version-990757 kubelet[1382]: I1123 10:16:18.789514    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2de3e7ea-96dc-4120-8500-245759aaacda-lib-modules\") pod \"kindnet-nz2m9\" (UID: \"2de3e7ea-96dc-4120-8500-245759aaacda\") " pod="kube-system/kindnet-nz2m9"
	Nov 23 10:16:18 old-k8s-version-990757 kubelet[1382]: I1123 10:16:18.789556    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc25b\" (UniqueName: \"kubernetes.io/projected/d727ffbe-b078-4abf-a715-fc9811920e00-kube-api-access-xc25b\") pod \"kube-proxy-99g4b\" (UID: \"d727ffbe-b078-4abf-a715-fc9811920e00\") " pod="kube-system/kube-proxy-99g4b"
	Nov 23 10:16:18 old-k8s-version-990757 kubelet[1382]: I1123 10:16:18.789586    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d727ffbe-b078-4abf-a715-fc9811920e00-xtables-lock\") pod \"kube-proxy-99g4b\" (UID: \"d727ffbe-b078-4abf-a715-fc9811920e00\") " pod="kube-system/kube-proxy-99g4b"
	Nov 23 10:16:18 old-k8s-version-990757 kubelet[1382]: I1123 10:16:18.789615    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2de3e7ea-96dc-4120-8500-245759aaacda-cni-cfg\") pod \"kindnet-nz2m9\" (UID: \"2de3e7ea-96dc-4120-8500-245759aaacda\") " pod="kube-system/kindnet-nz2m9"
	Nov 23 10:16:18 old-k8s-version-990757 kubelet[1382]: I1123 10:16:18.789649    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhbqz\" (UniqueName: \"kubernetes.io/projected/2de3e7ea-96dc-4120-8500-245759aaacda-kube-api-access-jhbqz\") pod \"kindnet-nz2m9\" (UID: \"2de3e7ea-96dc-4120-8500-245759aaacda\") " pod="kube-system/kindnet-nz2m9"
	Nov 23 10:16:18 old-k8s-version-990757 kubelet[1382]: I1123 10:16:18.789679    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d727ffbe-b078-4abf-a715-fc9811920e00-kube-proxy\") pod \"kube-proxy-99g4b\" (UID: \"d727ffbe-b078-4abf-a715-fc9811920e00\") " pod="kube-system/kube-proxy-99g4b"
	Nov 23 10:16:18 old-k8s-version-990757 kubelet[1382]: I1123 10:16:18.789755    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d727ffbe-b078-4abf-a715-fc9811920e00-lib-modules\") pod \"kube-proxy-99g4b\" (UID: \"d727ffbe-b078-4abf-a715-fc9811920e00\") " pod="kube-system/kube-proxy-99g4b"
	Nov 23 10:16:22 old-k8s-version-990757 kubelet[1382]: I1123 10:16:22.289356    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-99g4b" podStartSLOduration=4.289316534 podCreationTimestamp="2025-11-23 10:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:16:19.283251153 +0000 UTC m=+14.192839740" watchObservedRunningTime="2025-11-23 10:16:22.289316534 +0000 UTC m=+17.198905120"
	Nov 23 10:16:32 old-k8s-version-990757 kubelet[1382]: I1123 10:16:32.241800    1382 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 23 10:16:32 old-k8s-version-990757 kubelet[1382]: I1123 10:16:32.266844    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-nz2m9" podStartSLOduration=11.757163738 podCreationTimestamp="2025-11-23 10:16:18 +0000 UTC" firstStartedPulling="2025-11-23 10:16:19.062946397 +0000 UTC m=+13.972534977" lastFinishedPulling="2025-11-23 10:16:21.572564895 +0000 UTC m=+16.482153477" observedRunningTime="2025-11-23 10:16:22.289422565 +0000 UTC m=+17.199011152" watchObservedRunningTime="2025-11-23 10:16:32.266782238 +0000 UTC m=+27.176370849"
	Nov 23 10:16:32 old-k8s-version-990757 kubelet[1382]: I1123 10:16:32.267130    1382 topology_manager.go:215] "Topology Admit Handler" podUID="d381637c-3686-4e19-95eb-489a0328363d" podNamespace="kube-system" podName="coredns-5dd5756b68-fsbfv"
	Nov 23 10:16:32 old-k8s-version-990757 kubelet[1382]: I1123 10:16:32.267369    1382 topology_manager.go:215] "Topology Admit Handler" podUID="b9036b3a-e19e-439b-9584-93d805cb21ea" podNamespace="kube-system" podName="storage-provisioner"
	Nov 23 10:16:32 old-k8s-version-990757 kubelet[1382]: I1123 10:16:32.282958    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5jrd\" (UniqueName: \"kubernetes.io/projected/b9036b3a-e19e-439b-9584-93d805cb21ea-kube-api-access-t5jrd\") pod \"storage-provisioner\" (UID: \"b9036b3a-e19e-439b-9584-93d805cb21ea\") " pod="kube-system/storage-provisioner"
	Nov 23 10:16:32 old-k8s-version-990757 kubelet[1382]: I1123 10:16:32.283018    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b9036b3a-e19e-439b-9584-93d805cb21ea-tmp\") pod \"storage-provisioner\" (UID: \"b9036b3a-e19e-439b-9584-93d805cb21ea\") " pod="kube-system/storage-provisioner"
	Nov 23 10:16:32 old-k8s-version-990757 kubelet[1382]: I1123 10:16:32.283122    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q24l8\" (UniqueName: \"kubernetes.io/projected/d381637c-3686-4e19-95eb-489a0328363d-kube-api-access-q24l8\") pod \"coredns-5dd5756b68-fsbfv\" (UID: \"d381637c-3686-4e19-95eb-489a0328363d\") " pod="kube-system/coredns-5dd5756b68-fsbfv"
	Nov 23 10:16:32 old-k8s-version-990757 kubelet[1382]: I1123 10:16:32.283162    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d381637c-3686-4e19-95eb-489a0328363d-config-volume\") pod \"coredns-5dd5756b68-fsbfv\" (UID: \"d381637c-3686-4e19-95eb-489a0328363d\") " pod="kube-system/coredns-5dd5756b68-fsbfv"
	Nov 23 10:16:33 old-k8s-version-990757 kubelet[1382]: I1123 10:16:33.312549    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-fsbfv" podStartSLOduration=15.312498636 podCreationTimestamp="2025-11-23 10:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:16:33.312140658 +0000 UTC m=+28.221729245" watchObservedRunningTime="2025-11-23 10:16:33.312498636 +0000 UTC m=+28.222087224"
	Nov 23 10:16:35 old-k8s-version-990757 kubelet[1382]: I1123 10:16:35.698066    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=17.69800425 podCreationTimestamp="2025-11-23 10:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:16:33.331679231 +0000 UTC m=+28.241267822" watchObservedRunningTime="2025-11-23 10:16:35.69800425 +0000 UTC m=+30.607593185"
	Nov 23 10:16:35 old-k8s-version-990757 kubelet[1382]: I1123 10:16:35.698456    1382 topology_manager.go:215] "Topology Admit Handler" podUID="f5410b61-89c3-4f61-ae72-922d00c885eb" podNamespace="default" podName="busybox"
	Nov 23 10:16:35 old-k8s-version-990757 kubelet[1382]: I1123 10:16:35.804304    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8zxn\" (UniqueName: \"kubernetes.io/projected/f5410b61-89c3-4f61-ae72-922d00c885eb-kube-api-access-w8zxn\") pod \"busybox\" (UID: \"f5410b61-89c3-4f61-ae72-922d00c885eb\") " pod="default/busybox"
	Nov 23 10:16:38 old-k8s-version-990757 kubelet[1382]: I1123 10:16:38.325569    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.365416039 podCreationTimestamp="2025-11-23 10:16:35 +0000 UTC" firstStartedPulling="2025-11-23 10:16:36.018500439 +0000 UTC m=+30.928089018" lastFinishedPulling="2025-11-23 10:16:37.978610819 +0000 UTC m=+32.888199399" observedRunningTime="2025-11-23 10:16:38.325230549 +0000 UTC m=+33.234819140" watchObservedRunningTime="2025-11-23 10:16:38.32552642 +0000 UTC m=+33.235115007"
	
	
	==> storage-provisioner [cc0b716213770c0ca1efbd85774024d3885cf995d4282f993e2bcb9d39d8c5be] <==
	I1123 10:16:32.625797       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:16:32.635221       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:16:32.635269       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 10:16:32.641598       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:16:32.641691       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"35efb046-0c13-4b37-bd0a-2155a92525f0", APIVersion:"v1", ResourceVersion:"390", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-990757_7bccc9a6-a66c-460b-866d-404dc71af29a became leader
	I1123 10:16:32.641794       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-990757_7bccc9a6-a66c-460b-866d-404dc71af29a!
	I1123 10:16:32.742210       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-990757_7bccc9a6-a66c-460b-866d-404dc71af29a!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-990757 -n old-k8s-version-990757
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-990757 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.42s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-541522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-541522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (247.58199ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:16:51Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
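The MK_ADDON_ENABLE_PAUSED error above is minikube's pre-flight paused check failing: before enabling the addon it appears to run `sudo runc list -f json` on the node, and that command exits non-zero because /run/runc is missing. A minimal reproduction sketch, assuming the node is the Docker container named no-preload-541522 shown in the docker inspect output below:

    # Re-run the same paused check that minikube performs before enabling an addon.
    # The kic node is a privileged Docker container, so exec into it and list runc containers as JSON.
    docker exec no-preload-541522 sudo runc list -f json
    # On this run the command fails with:
    #   level=error msg="open /run/runc: no such file or directory"
    # which is exactly the stderr captured above.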
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-541522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-541522 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-541522 describe deploy/metrics-server -n kube-system: exit status 1 (55.632145ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-541522 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
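Had the metrics-server deployment actually been created, the image override asserted by the test could be checked directly. A hedged example (the jsonpath expression is illustrative and not part of the test itself):

    # Print the container images used by the metrics-server deployment in kube-system.
    kubectl --context no-preload-541522 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[*].image}'
    # The test expects this output to contain fake.domain/registry.k8s.io/echoserver:1.4;
    # here the deployment was never created, so kubectl reports NotFound instead.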
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-541522
helpers_test.go:243: (dbg) docker inspect no-preload-541522:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6eb78d2b6b76b54751cdbc6803f7c5e6c001120afa09311adefdc9e243248ba",
	        "Created": "2025-11-23T10:15:44.853738209Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 345978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:15:45.007704875Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/e6eb78d2b6b76b54751cdbc6803f7c5e6c001120afa09311adefdc9e243248ba/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6eb78d2b6b76b54751cdbc6803f7c5e6c001120afa09311adefdc9e243248ba/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6eb78d2b6b76b54751cdbc6803f7c5e6c001120afa09311adefdc9e243248ba/hosts",
	        "LogPath": "/var/lib/docker/containers/e6eb78d2b6b76b54751cdbc6803f7c5e6c001120afa09311adefdc9e243248ba/e6eb78d2b6b76b54751cdbc6803f7c5e6c001120afa09311adefdc9e243248ba-json.log",
	        "Name": "/no-preload-541522",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-541522:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-541522",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6eb78d2b6b76b54751cdbc6803f7c5e6c001120afa09311adefdc9e243248ba",
	                "LowerDir": "/var/lib/docker/overlay2/23785fec93f41cf14687a94fe439202e1986b9d5ecc74e3696510796f789088e-init/diff:/var/lib/docker/overlay2/fa24abb4c55f78a010c7e2a32f724b8d5e912441e40bb77877899b0e5f3a9c8d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/23785fec93f41cf14687a94fe439202e1986b9d5ecc74e3696510796f789088e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/23785fec93f41cf14687a94fe439202e1986b9d5ecc74e3696510796f789088e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/23785fec93f41cf14687a94fe439202e1986b9d5ecc74e3696510796f789088e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-541522",
	                "Source": "/var/lib/docker/volumes/no-preload-541522/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-541522",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-541522",
	                "name.minikube.sigs.k8s.io": "no-preload-541522",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ff2638fe2caaec9e5c42f449ff251d801147e3580ee6753b2dbe828e40b4bab6",
	            "SandboxKey": "/var/run/docker/netns/ff2638fe2caa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-541522": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0caff4f103e2bb50c273486830a8e865b14f6dbe8e146654adb86f6d80472821",
	                    "EndpointID": "23eb666bb1a04629ba07eea8b4c79f51df6b6f1c8e6ec46ecd0dec3862561587",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "56:66:80:c3:dd:4d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-541522",
	                        "e6eb78d2b6b7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
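The NetworkSettings.Ports block in the inspect output above is how minikube locates the node's forwarded ports. An illustrative command, mirroring the cli_runner invocations that appear later in this log, for reading the host port mapped to the node's SSH port:

    # Print the host port Docker mapped to 22/tcp on the node container.
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-541522
    # For this run the inspect output shows 22/tcp bound to 127.0.0.1:33088.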
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-541522 -n no-preload-541522
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-541522 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-541522 logs -n 25: (1.110273295s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                          │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-791161 sudo cat /var/lib/kubelet/config.yaml                                                                                                │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:15 UTC │ 23 Nov 25 10:15 UTC │
	│ ssh     │ -p flannel-791161 sudo systemctl status docker --all --full --no-pager                                                                                 │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p flannel-791161 sudo systemctl cat docker --no-pager                                                                                                 │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:15 UTC │ 23 Nov 25 10:15 UTC │
	│ ssh     │ -p flannel-791161 sudo cat /etc/docker/daemon.json                                                                                                     │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p flannel-791161 sudo docker system info                                                                                                              │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p flannel-791161 sudo systemctl status cri-docker --all --full --no-pager                                                                             │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p flannel-791161 sudo systemctl cat cri-docker --no-pager                                                                                             │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:15 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                        │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ ssh     │ -p flannel-791161 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                  │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo cri-dockerd --version                                                                                                           │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo systemctl status containerd --all --full --no-pager                                                                             │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ ssh     │ -p flannel-791161 sudo systemctl cat containerd --no-pager                                                                                             │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo cat /lib/systemd/system/containerd.service                                                                                      │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo cat /etc/containerd/config.toml                                                                                                 │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo containerd config dump                                                                                                          │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo systemctl status crio --all --full --no-pager                                                                                   │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo systemctl cat crio --no-pager                                                                                                   │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                         │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo crio config                                                                                                                     │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ delete  │ -p flannel-791161                                                                                                                                      │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ start   │ -p embed-certs-412306 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ embed-certs-412306     │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p bridge-791161 pgrep -a kubelet                                                                                                                      │ bridge-791161          │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-990757 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain           │ old-k8s-version-990757 │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ stop    │ -p old-k8s-version-990757 --alsologtostderr -v=3                                                                                                       │ old-k8s-version-990757 │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-541522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                │ no-preload-541522      │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:16:09
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:16:09.384488  356138 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:16:09.384651  356138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:16:09.384664  356138 out.go:374] Setting ErrFile to fd 2...
	I1123 10:16:09.384670  356138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:16:09.384941  356138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:16:09.385666  356138 out.go:368] Setting JSON to false
	I1123 10:16:09.387494  356138 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10710,"bootTime":1763882259,"procs":490,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:16:09.387583  356138 start.go:143] virtualization: kvm guest
	I1123 10:16:09.389675  356138 out.go:179] * [embed-certs-412306] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:16:09.391215  356138 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:16:09.391256  356138 notify.go:221] Checking for updates...
	I1123 10:16:09.393259  356138 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:16:09.394603  356138 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:16:09.395803  356138 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 10:16:09.397054  356138 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:16:09.398810  356138 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:16:09.400667  356138 config.go:182] Loaded profile config "bridge-791161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:16:09.400825  356138 config.go:182] Loaded profile config "no-preload-541522": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:16:09.400980  356138 config.go:182] Loaded profile config "old-k8s-version-990757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 10:16:09.401117  356138 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:16:09.431550  356138 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 10:16:09.431721  356138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:16:09.501610  356138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-23 10:16:09.486961066 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:16:09.501769  356138 docker.go:319] overlay module found
	I1123 10:16:09.503502  356138 out.go:179] * Using the docker driver based on user configuration
	I1123 10:16:08.932406  341630 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:08.932428  341630 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:16:08.932485  341630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-791161
	I1123 10:16:08.962254  341630 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:08.962286  341630 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:16:08.962357  341630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-791161
	I1123 10:16:08.969489  341630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/bridge-791161/id_rsa Username:docker}
	I1123 10:16:08.986744  341630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/bridge-791161/id_rsa Username:docker}
	I1123 10:16:09.003812  341630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:16:09.056864  341630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:16:09.090911  341630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:09.108517  341630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:09.226531  341630 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1123 10:16:09.228833  341630 node_ready.go:35] waiting up to 15m0s for node "bridge-791161" to be "Ready" ...
	I1123 10:16:09.245324  341630 node_ready.go:49] node "bridge-791161" is "Ready"
	I1123 10:16:09.245361  341630 node_ready.go:38] duration metric: took 16.394308ms for node "bridge-791161" to be "Ready" ...
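The node_ready wait above boils down to polling the Node object until its Ready condition reports True. A minimal sketch of that check using client-go (the kubeconfig path, node name, and timeout here are illustrative, not taken from minikube's code):

// nodeready.go - sketch: poll a node's Ready condition with client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the NodeReady condition is True.
func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(15 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "bridge-791161", metav1.GetOptions{})
		if err == nil && nodeIsReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node to be Ready")
}
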
	I1123 10:16:09.245379  341630 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:16:09.245433  341630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:16:09.502654  341630 api_server.go:72] duration metric: took 602.591604ms to wait for apiserver process to appear ...
	I1123 10:16:09.502681  341630 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:16:09.502706  341630 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 10:16:09.509263  341630 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 10:16:09.510155  341630 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
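The healthz wait a few lines up is a plain HTTPS GET against the apiserver's /healthz endpoint, repeated until it answers 200 with the body "ok". A minimal standard-library sketch of that probe (the address, timeout, and the decision to skip certificate verification are assumptions for illustration; a real client would verify against the cluster CA):

// healthz.go - sketch: poll an apiserver /healthz endpoint until it returns 200/ok.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip verification only for this illustrative probe; verify against the cluster CA in real code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.103.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver is healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
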
	I1123 10:16:09.504848  356138 start.go:309] selected driver: docker
	I1123 10:16:09.504864  356138 start.go:927] validating driver "docker" against <nil>
	I1123 10:16:09.504878  356138 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:16:09.505666  356138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:16:09.570314  356138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-23 10:16:09.560155745 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:16:09.570532  356138 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 10:16:09.570826  356138 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:16:09.572359  356138 out.go:179] * Using Docker driver with root privileges
	I1123 10:16:09.573651  356138 cni.go:84] Creating CNI manager for ""
	I1123 10:16:09.573735  356138 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:16:09.573748  356138 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:16:09.573829  356138 start.go:353] cluster config:
	{Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:16:09.575056  356138 out.go:179] * Starting "embed-certs-412306" primary control-plane node in "embed-certs-412306" cluster
	I1123 10:16:09.576077  356138 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:16:09.577197  356138 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:16:09.578314  356138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:16:09.578350  356138 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 10:16:09.578363  356138 cache.go:65] Caching tarball of preloaded images
	I1123 10:16:09.578405  356138 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:16:09.578475  356138 preload.go:238] Found /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 10:16:09.578490  356138 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:16:09.578607  356138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/config.json ...
	I1123 10:16:09.578632  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/config.json: {Name:mk1fd6c8c1b8c2c18e5b4ea57dc46155bd997340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:09.603731  356138 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:16:09.603757  356138 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:16:09.603773  356138 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:16:09.603816  356138 start.go:360] acquireMachinesLock for embed-certs-412306: {Name:mk4f25fc676f86a4d15ab0bc341b16f0d56928c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:16:09.603920  356138 start.go:364] duration metric: took 78.804µs to acquireMachinesLock for "embed-certs-412306"
	I1123 10:16:09.603953  356138 start.go:93] Provisioning new machine with config: &{Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:16:09.604048  356138 start.go:125] createHost starting for "" (driver="docker")
	I1123 10:16:09.510617  341630 api_server.go:141] control plane version: v1.34.1
	I1123 10:16:09.510639  341630 api_server.go:131] duration metric: took 7.9515ms to wait for apiserver health ...
	I1123 10:16:09.510646  341630 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:16:09.511774  341630 addons.go:530] duration metric: took 611.647616ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 10:16:09.513306  341630 system_pods.go:59] 6 kube-system pods found
	I1123 10:16:09.513342  341630 system_pods.go:61] "etcd-bridge-791161" [0cef3305-4f78-41d8-955b-4dc8e3e1b20b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:16:09.513353  341630 system_pods.go:61] "kube-apiserver-bridge-791161" [c3ee8173-f846-4c28-9542-5db74dd1ca3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:16:09.513367  341630 system_pods.go:61] "kube-controller-manager-bridge-791161" [f67ddef5-f1cd-4d3f-b388-7d44c2a82e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:16:09.513379  341630 system_pods.go:61] "kube-proxy-sn6s2" [ebbef6f3-f2af-4403-bf85-3391bfe8374f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 10:16:09.513388  341630 system_pods.go:61] "kube-scheduler-bridge-791161" [1b5778a2-5fe1-4a74-9bce-36ef3021458f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:16:09.513400  341630 system_pods.go:61] "storage-provisioner" [450add9d-9942-4b99-b18d-13cf2aac97d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:09.513408  341630 system_pods.go:74] duration metric: took 2.755326ms to wait for pod list to return data ...
	I1123 10:16:09.513421  341630 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:16:09.515529  341630 default_sa.go:45] found service account: "default"
	I1123 10:16:09.515550  341630 default_sa.go:55] duration metric: took 2.122813ms for default service account to be created ...
	I1123 10:16:09.515559  341630 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:16:09.517664  341630 system_pods.go:86] 6 kube-system pods found
	I1123 10:16:09.517695  341630 system_pods.go:89] "etcd-bridge-791161" [0cef3305-4f78-41d8-955b-4dc8e3e1b20b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:16:09.517709  341630 system_pods.go:89] "kube-apiserver-bridge-791161" [c3ee8173-f846-4c28-9542-5db74dd1ca3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:16:09.517719  341630 system_pods.go:89] "kube-controller-manager-bridge-791161" [f67ddef5-f1cd-4d3f-b388-7d44c2a82e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:16:09.517731  341630 system_pods.go:89] "kube-proxy-sn6s2" [ebbef6f3-f2af-4403-bf85-3391bfe8374f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 10:16:09.517738  341630 system_pods.go:89] "kube-scheduler-bridge-791161" [1b5778a2-5fe1-4a74-9bce-36ef3021458f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:16:09.517746  341630 system_pods.go:89] "storage-provisioner" [450add9d-9942-4b99-b18d-13cf2aac97d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:09.517783  341630 retry.go:31] will retry after 269.045888ms: missing components: kube-dns, kube-proxy
	I1123 10:16:09.732517  341630 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-791161" context rescaled to 1 replicas
	I1123 10:16:09.792357  341630 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:09.792401  341630 system_pods.go:89] "coredns-66bc5c9577-5jbpl" [d4bd48f5-9fde-4a68-b96b-a0c62824cadc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:09.792413  341630 system_pods.go:89] "coredns-66bc5c9577-p6sw2" [7a660efc-5dc7-4014-994c-64d53264718d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:09.792424  341630 system_pods.go:89] "etcd-bridge-791161" [0cef3305-4f78-41d8-955b-4dc8e3e1b20b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:16:09.792436  341630 system_pods.go:89] "kube-apiserver-bridge-791161" [c3ee8173-f846-4c28-9542-5db74dd1ca3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:16:09.792446  341630 system_pods.go:89] "kube-controller-manager-bridge-791161" [f67ddef5-f1cd-4d3f-b388-7d44c2a82e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:16:09.792463  341630 system_pods.go:89] "kube-proxy-sn6s2" [ebbef6f3-f2af-4403-bf85-3391bfe8374f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 10:16:09.792475  341630 system_pods.go:89] "kube-scheduler-bridge-791161" [1b5778a2-5fe1-4a74-9bce-36ef3021458f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:16:09.792483  341630 system_pods.go:89] "storage-provisioner" [450add9d-9942-4b99-b18d-13cf2aac97d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:09.792509  341630 retry.go:31] will retry after 270.754186ms: missing components: kube-dns, kube-proxy
	I1123 10:16:10.068331  341630 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:10.068370  341630 system_pods.go:89] "coredns-66bc5c9577-5jbpl" [d4bd48f5-9fde-4a68-b96b-a0c62824cadc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:10.068381  341630 system_pods.go:89] "coredns-66bc5c9577-p6sw2" [7a660efc-5dc7-4014-994c-64d53264718d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:10.068391  341630 system_pods.go:89] "etcd-bridge-791161" [0cef3305-4f78-41d8-955b-4dc8e3e1b20b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:16:10.068400  341630 system_pods.go:89] "kube-apiserver-bridge-791161" [c3ee8173-f846-4c28-9542-5db74dd1ca3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:16:10.068409  341630 system_pods.go:89] "kube-controller-manager-bridge-791161" [f67ddef5-f1cd-4d3f-b388-7d44c2a82e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:16:10.068430  341630 system_pods.go:89] "kube-proxy-sn6s2" [ebbef6f3-f2af-4403-bf85-3391bfe8374f] Running
	I1123 10:16:10.068443  341630 system_pods.go:89] "kube-scheduler-bridge-791161" [1b5778a2-5fe1-4a74-9bce-36ef3021458f] Running
	I1123 10:16:10.068450  341630 system_pods.go:89] "storage-provisioner" [450add9d-9942-4b99-b18d-13cf2aac97d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:10.068477  341630 retry.go:31] will retry after 429.754148ms: missing components: kube-dns
	I1123 10:16:10.503386  341630 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:10.503419  341630 system_pods.go:89] "coredns-66bc5c9577-5jbpl" [d4bd48f5-9fde-4a68-b96b-a0c62824cadc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:10.503426  341630 system_pods.go:89] "coredns-66bc5c9577-p6sw2" [7a660efc-5dc7-4014-994c-64d53264718d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:10.503433  341630 system_pods.go:89] "etcd-bridge-791161" [0cef3305-4f78-41d8-955b-4dc8e3e1b20b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:16:10.503438  341630 system_pods.go:89] "kube-apiserver-bridge-791161" [c3ee8173-f846-4c28-9542-5db74dd1ca3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:16:10.503444  341630 system_pods.go:89] "kube-controller-manager-bridge-791161" [f67ddef5-f1cd-4d3f-b388-7d44c2a82e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:16:10.503448  341630 system_pods.go:89] "kube-proxy-sn6s2" [ebbef6f3-f2af-4403-bf85-3391bfe8374f] Running
	I1123 10:16:10.503451  341630 system_pods.go:89] "kube-scheduler-bridge-791161" [1b5778a2-5fe1-4a74-9bce-36ef3021458f] Running
	I1123 10:16:10.503454  341630 system_pods.go:89] "storage-provisioner" [450add9d-9942-4b99-b18d-13cf2aac97d6] Running
	I1123 10:16:10.503470  341630 retry.go:31] will retry after 408.73206ms: missing components: kube-dns
	I1123 10:16:10.917355  341630 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:10.917398  341630 system_pods.go:89] "coredns-66bc5c9577-5jbpl" [d4bd48f5-9fde-4a68-b96b-a0c62824cadc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:10.917410  341630 system_pods.go:89] "coredns-66bc5c9577-p6sw2" [7a660efc-5dc7-4014-994c-64d53264718d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:10.917420  341630 system_pods.go:89] "etcd-bridge-791161" [0cef3305-4f78-41d8-955b-4dc8e3e1b20b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:16:10.917429  341630 system_pods.go:89] "kube-apiserver-bridge-791161" [c3ee8173-f846-4c28-9542-5db74dd1ca3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:16:10.917451  341630 system_pods.go:89] "kube-controller-manager-bridge-791161" [f67ddef5-f1cd-4d3f-b388-7d44c2a82e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:16:10.917465  341630 system_pods.go:89] "kube-proxy-sn6s2" [ebbef6f3-f2af-4403-bf85-3391bfe8374f] Running
	I1123 10:16:10.917474  341630 system_pods.go:89] "kube-scheduler-bridge-791161" [1b5778a2-5fe1-4a74-9bce-36ef3021458f] Running
	I1123 10:16:10.917478  341630 system_pods.go:89] "storage-provisioner" [450add9d-9942-4b99-b18d-13cf2aac97d6] Running
	I1123 10:16:10.917500  341630 retry.go:31] will retry after 552.289133ms: missing components: kube-dns
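The repeated "will retry after …ms: missing components: kube-dns" lines reflect a poll-and-retry loop whose delay grows with a bit of jitter between attempts. A small standard-library sketch of that pattern (the check function, bounds, and backoff numbers are illustrative):

// retry.go - sketch: retry a readiness check with jittered, growing backoff.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling check until it succeeds or the deadline passes,
// sleeping a randomized, slowly growing interval between attempts.
func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	base := 250 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: last error: %w", err)
		}
		// grow the base a little each attempt and add jitter
		sleep := base + time.Duration(rand.Int63n(int64(base)))
		base += base / 4
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
}

func main() {
	missing := 3
	err := retryUntil(2*time.Minute, func() error {
		if missing > 0 { // stand-in for "are kube-dns / kube-proxy pods Running?"
			missing--
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
	fmt.Println("done:", err)
}
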
	I1123 10:16:09.278883  344952 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:16:09.372128  344952 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 10:16:09.619893  344952 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:16:10.283551  344952 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:16:10.867997  344952 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:16:10.868330  344952 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-541522] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 10:16:10.989337  344952 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:16:10.989485  344952 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-541522] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 10:16:11.169439  344952 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:16:11.400232  344952 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:16:11.647348  344952 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:16:11.647533  344952 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:16:11.771440  344952 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:16:12.267757  344952 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 10:16:12.654977  344952 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:16:12.947814  344952 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:16:13.078046  344952 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:16:13.078626  344952 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:16:13.136374  344952 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 10:16:08.666124  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:09.166689  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:09.666832  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:10.166752  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:10.666681  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:11.165984  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:11.666304  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:12.166196  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:12.666342  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:13.166030  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:13.195964  344952 out.go:252]   - Booting up control plane ...
	I1123 10:16:13.196155  344952 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:16:13.196274  344952 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:16:13.196362  344952 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:16:13.196492  344952 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:16:13.196611  344952 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 10:16:13.196738  344952 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 10:16:13.197029  344952 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:16:13.197260  344952 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:16:13.266865  344952 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 10:16:13.267069  344952 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 10:16:09.606473  356138 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 10:16:09.606832  356138 start.go:159] libmachine.API.Create for "embed-certs-412306" (driver="docker")
	I1123 10:16:09.606885  356138 client.go:173] LocalClient.Create starting
	I1123 10:16:09.607022  356138 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem
	I1123 10:16:09.607067  356138 main.go:143] libmachine: Decoding PEM data...
	I1123 10:16:09.607113  356138 main.go:143] libmachine: Parsing certificate...
	I1123 10:16:09.607181  356138 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem
	I1123 10:16:09.607208  356138 main.go:143] libmachine: Decoding PEM data...
	I1123 10:16:09.607233  356138 main.go:143] libmachine: Parsing certificate...
	I1123 10:16:09.607683  356138 cli_runner.go:164] Run: docker network inspect embed-certs-412306 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 10:16:09.629449  356138 cli_runner.go:211] docker network inspect embed-certs-412306 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 10:16:09.629532  356138 network_create.go:284] running [docker network inspect embed-certs-412306] to gather additional debugging logs...
	I1123 10:16:09.629558  356138 cli_runner.go:164] Run: docker network inspect embed-certs-412306
	W1123 10:16:09.649505  356138 cli_runner.go:211] docker network inspect embed-certs-412306 returned with exit code 1
	I1123 10:16:09.649534  356138 network_create.go:287] error running [docker network inspect embed-certs-412306]: docker network inspect embed-certs-412306: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-412306 not found
	I1123 10:16:09.649551  356138 network_create.go:289] output of [docker network inspect embed-certs-412306]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-412306 not found
	
	** /stderr **
	I1123 10:16:09.649693  356138 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:16:09.668995  356138 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9af1e2c0d039 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:86:44:24:e5:b5} reservation:<nil>}
	I1123 10:16:09.669799  356138 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-461f783b5692 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:1f:63:e6:a3:d5} reservation:<nil>}
	I1123 10:16:09.670740  356138 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-00c53b2b0c8c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:de:97:e2:97:bc:92} reservation:<nil>}
	I1123 10:16:09.671473  356138 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-052388d40ecf IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:de:97:1c:bc:d1:b9} reservation:<nil>}
	I1123 10:16:09.672185  356138 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-0caff4f103e2 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f2:ae:32:4b:cf:65} reservation:<nil>}
	I1123 10:16:09.676786  356138 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d5fec0}
	I1123 10:16:09.676832  356138 network_create.go:124] attempt to create docker network embed-certs-412306 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1123 10:16:09.676908  356138 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-412306 embed-certs-412306
	I1123 10:16:09.737193  356138 network_create.go:108] docker network embed-certs-412306 192.168.94.0/24 created
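Subnet selection above steps through candidate 192.168.x.0/24 ranges, skips any already attached to an existing bridge network, and creates the docker network on the first free one. A rough sketch of the same idea that shells out to the docker CLI (the candidate step of 9, the network name, and the error handling are illustrative):

// netcreate.go - sketch: pick a free 192.168.x.0/24 subnet and create a docker network on it.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// takenSubnets collects the subnets of all existing docker networks.
func takenSubnets() (map[string]bool, error) {
	out, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		return nil, err
	}
	taken := map[string]bool{}
	for _, id := range strings.Fields(string(out)) {
		cfg, err := exec.Command("docker", "network", "inspect", id,
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
		if err != nil {
			continue
		}
		for _, s := range strings.Fields(string(cfg)) {
			taken[s] = true
		}
	}
	return taken, nil
}

func main() {
	taken, err := takenSubnets()
	if err != nil {
		panic(err)
	}
	// walk candidates the way the log does: 49, 58, 67, 76, 85, 94, ...
	for third := 49; third < 255; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[subnet] {
			fmt.Println("skipping taken subnet", subnet)
			continue
		}
		gateway := fmt.Sprintf("192.168.%d.1", third)
		cmd := exec.Command("docker", "network", "create", "--driver=bridge",
			"--subnet="+subnet, "--gateway="+gateway, "my-network")
		if out, err := cmd.CombinedOutput(); err != nil {
			panic(fmt.Errorf("%v: %s", err, out))
		}
		fmt.Println("created network on", subnet)
		return
	}
	fmt.Println("no free subnet found")
}
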
	I1123 10:16:09.737241  356138 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-412306" container
	I1123 10:16:09.737307  356138 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 10:16:09.758160  356138 cli_runner.go:164] Run: docker volume create embed-certs-412306 --label name.minikube.sigs.k8s.io=embed-certs-412306 --label created_by.minikube.sigs.k8s.io=true
	I1123 10:16:09.779650  356138 oci.go:103] Successfully created a docker volume embed-certs-412306
	I1123 10:16:09.779742  356138 cli_runner.go:164] Run: docker run --rm --name embed-certs-412306-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-412306 --entrypoint /usr/bin/test -v embed-certs-412306:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 10:16:10.255390  356138 oci.go:107] Successfully prepared a docker volume embed-certs-412306
	I1123 10:16:10.255455  356138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:16:10.255469  356138 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 10:16:10.255530  356138 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-412306:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 10:16:11.474871  341630 system_pods.go:86] 7 kube-system pods found
	I1123 10:16:11.474914  341630 system_pods.go:89] "coredns-66bc5c9577-p6sw2" [7a660efc-5dc7-4014-994c-64d53264718d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:11.474924  341630 system_pods.go:89] "etcd-bridge-791161" [0cef3305-4f78-41d8-955b-4dc8e3e1b20b] Running
	I1123 10:16:11.474945  341630 system_pods.go:89] "kube-apiserver-bridge-791161" [c3ee8173-f846-4c28-9542-5db74dd1ca3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:16:11.474955  341630 system_pods.go:89] "kube-controller-manager-bridge-791161" [f67ddef5-f1cd-4d3f-b388-7d44c2a82e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:16:11.474961  341630 system_pods.go:89] "kube-proxy-sn6s2" [ebbef6f3-f2af-4403-bf85-3391bfe8374f] Running
	I1123 10:16:11.474968  341630 system_pods.go:89] "kube-scheduler-bridge-791161" [1b5778a2-5fe1-4a74-9bce-36ef3021458f] Running
	I1123 10:16:11.474973  341630 system_pods.go:89] "storage-provisioner" [450add9d-9942-4b99-b18d-13cf2aac97d6] Running
	I1123 10:16:11.474984  341630 system_pods.go:126] duration metric: took 1.959418216s to wait for k8s-apps to be running ...
	I1123 10:16:11.474994  341630 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:16:11.475054  341630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:16:11.489403  341630 system_svc.go:56] duration metric: took 14.399252ms WaitForService to wait for kubelet
	I1123 10:16:11.489444  341630 kubeadm.go:587] duration metric: took 2.58938325s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:16:11.489470  341630 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:16:11.492755  341630 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:16:11.492782  341630 node_conditions.go:123] node cpu capacity is 8
	I1123 10:16:11.492808  341630 node_conditions.go:105] duration metric: took 3.332237ms to run NodePressure ...
	I1123 10:16:11.492820  341630 start.go:242] waiting for startup goroutines ...
	I1123 10:16:11.492829  341630 start.go:247] waiting for cluster config update ...
	I1123 10:16:11.492840  341630 start.go:256] writing updated cluster config ...
	I1123 10:16:11.493117  341630 ssh_runner.go:195] Run: rm -f paused
	I1123 10:16:11.497127  341630 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:11.501040  341630 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p6sw2" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 10:16:13.507081  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:15.507577  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	I1123 10:16:13.666736  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:14.166653  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:14.666411  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:15.166345  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:15.665938  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:16.166765  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:16.666304  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:17.166588  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:17.665914  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:18.166076  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:18.250824  344706 kubeadm.go:1114] duration metric: took 12.162789359s to wait for elevateKubeSystemPrivileges
	I1123 10:16:18.250873  344706 kubeadm.go:403] duration metric: took 24.23117455s to StartCluster
	I1123 10:16:18.250896  344706 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:18.250984  344706 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:16:18.252313  344706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:18.252591  344706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:16:18.252586  344706 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:16:18.252625  344706 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:16:18.252726  344706 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-990757"
	I1123 10:16:18.252748  344706 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-990757"
	I1123 10:16:18.252763  344706 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-990757"
	I1123 10:16:18.252783  344706 host.go:66] Checking if "old-k8s-version-990757" exists ...
	I1123 10:16:18.252788  344706 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-990757"
	I1123 10:16:18.252794  344706 config.go:182] Loaded profile config "old-k8s-version-990757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 10:16:18.253185  344706 cli_runner.go:164] Run: docker container inspect old-k8s-version-990757 --format={{.State.Status}}
	I1123 10:16:18.253439  344706 cli_runner.go:164] Run: docker container inspect old-k8s-version-990757 --format={{.State.Status}}
	I1123 10:16:18.256225  344706 out.go:179] * Verifying Kubernetes components...
	I1123 10:16:18.257663  344706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:16:18.278672  344706 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-990757"
	I1123 10:16:18.278725  344706 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:16:14.780767  356138 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-412306:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.525179702s)
	I1123 10:16:14.780809  356138 kic.go:203] duration metric: took 4.525336925s to extract preloaded images to volume ...
	W1123 10:16:14.780917  356138 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 10:16:14.780972  356138 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 10:16:14.781025  356138 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 10:16:14.851187  356138 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-412306 --name embed-certs-412306 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-412306 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-412306 --network embed-certs-412306 --ip 192.168.94.2 --volume embed-certs-412306:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 10:16:15.210434  356138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Running}}
	I1123 10:16:15.236308  356138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:16:15.262410  356138 cli_runner.go:164] Run: docker exec embed-certs-412306 stat /var/lib/dpkg/alternatives/iptables
	I1123 10:16:15.312245  356138 oci.go:144] the created container "embed-certs-412306" has a running status.
	I1123 10:16:15.312287  356138 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa...
	I1123 10:16:15.508167  356138 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 10:16:15.538609  356138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:16:15.568324  356138 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 10:16:15.568357  356138 kic_runner.go:114] Args: [docker exec --privileged embed-certs-412306 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 10:16:15.633555  356138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
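The key setup above generates an RSA keypair for the new machine and pushes the public half into /home/docker/.ssh/authorized_keys inside the container before fixing its ownership. A compact sketch of producing the two key files in the formats involved (paths and key size are assumptions; the docker-exec copy and chown steps are omitted):

// sshkey.go - sketch: generate an RSA keypair and write id_rsa plus an authorized_keys-format public key.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private key, PEM-encoded PKCS#1 - the id_rsa file.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
		panic(err)
	}
	// Public key in authorized_keys format - what gets appended to
	// /home/docker/.ssh/authorized_keys in the log.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		panic(err)
	}
}
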
	I1123 10:16:15.657069  356138 machine.go:94] provisionDockerMachine start ...
	I1123 10:16:15.657228  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:15.682778  356138 main.go:143] libmachine: Using SSH client type: native
	I1123 10:16:15.683182  356138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 10:16:15.683211  356138 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:16:15.834361  356138 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-412306
	
	I1123 10:16:15.834394  356138 ubuntu.go:182] provisioning hostname "embed-certs-412306"
	I1123 10:16:15.834460  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:15.855149  356138 main.go:143] libmachine: Using SSH client type: native
	I1123 10:16:15.855386  356138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 10:16:15.855408  356138 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-412306 && echo "embed-certs-412306" | sudo tee /etc/hostname
	I1123 10:16:16.024669  356138 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-412306
	
	I1123 10:16:16.024755  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:16.048672  356138 main.go:143] libmachine: Using SSH client type: native
	I1123 10:16:16.048986  356138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 10:16:16.049013  356138 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-412306' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-412306/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-412306' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:16:16.203231  356138 main.go:143] libmachine: SSH cmd err, output: <nil>: 
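Each provisioning command, such as the hostname script just above, runs over the SSH port docker published for the container (127.0.0.1:33098 in this run). A minimal sketch of executing one such command with golang.org/x/crypto/ssh (the key path, user, and host-key handling are assumptions):

// sshrun.go - sketch: run a provisioning command over SSH against a forwarded local port.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/.minikube/machines/embed-certs-412306/id_rsa") // illustrative path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a loopback-only test container, not for real hosts
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33098", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput(`sudo hostname embed-certs-412306 && echo "embed-certs-412306" | sudo tee /etc/hostname`)
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}
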
	I1123 10:16:16.203261  356138 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-64343/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-64343/.minikube}
	I1123 10:16:16.203307  356138 ubuntu.go:190] setting up certificates
	I1123 10:16:16.203329  356138 provision.go:84] configureAuth start
	I1123 10:16:16.203397  356138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412306
	I1123 10:16:16.224391  356138 provision.go:143] copyHostCerts
	I1123 10:16:16.224466  356138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem, removing ...
	I1123 10:16:16.224486  356138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem
	I1123 10:16:16.224568  356138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem (1082 bytes)
	I1123 10:16:16.224688  356138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem, removing ...
	I1123 10:16:16.224702  356138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem
	I1123 10:16:16.224741  356138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem (1123 bytes)
	I1123 10:16:16.224838  356138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem, removing ...
	I1123 10:16:16.224850  356138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem
	I1123 10:16:16.224885  356138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem (1675 bytes)
	I1123 10:16:16.224961  356138 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem org=jenkins.embed-certs-412306 san=[127.0.0.1 192.168.94.2 embed-certs-412306 localhost minikube]
	I1123 10:16:16.252659  356138 provision.go:177] copyRemoteCerts
	I1123 10:16:16.252799  356138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:16:16.252862  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:16.274900  356138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:16:16.381909  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 10:16:16.403354  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:16:16.421969  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:16:16.443591  356138 provision.go:87] duration metric: took 240.241648ms to configureAuth
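configureAuth issues a server certificate whose subject alternative names cover the loopback address, the container IP, and the machine names, signs it with the local minikube CA, and copies the result to /etc/docker. A sketch of minting such a SAN certificate with crypto/x509 (a throwaway CA is generated inline here for brevity; minikube signs with its existing ca.pem/ca-key.pem pair, and error handling is elided):

// servercert.go - sketch: sign a server certificate with IP and DNS SANs using crypto/x509.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA keypair and self-signed CA certificate (stands in for ca.pem / ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server keypair and certificate with the SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-412306"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-412306", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// PEM-encode to the file names the log later copies to /etc/docker.
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{
		Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)}), 0600)
}
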
	I1123 10:16:16.443629  356138 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:16:16.443817  356138 config.go:182] Loaded profile config "embed-certs-412306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:16:16.443936  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:16.464697  356138 main.go:143] libmachine: Using SSH client type: native
	I1123 10:16:16.465000  356138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 10:16:16.465026  356138 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:16:16.768631  356138 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
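The printf/tee command above leaves a small environment drop-in for CRI-O on the node. As an illustrative check (not part of the captured output), the file could be inspected over SSH like this:

	$ cat /etc/sysconfig/crio.minikube
	# expected, per the command shown above:
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '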
	I1123 10:16:16.768659  356138 machine.go:97] duration metric: took 1.11155421s to provisionDockerMachine
	I1123 10:16:16.768671  356138 client.go:176] duration metric: took 7.161774198s to LocalClient.Create
	I1123 10:16:16.768695  356138 start.go:167] duration metric: took 7.161866501s to libmachine.API.Create "embed-certs-412306"
	I1123 10:16:16.768705  356138 start.go:293] postStartSetup for "embed-certs-412306" (driver="docker")
	I1123 10:16:16.768716  356138 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:16:16.768980  356138 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:16:16.769049  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:16.800429  356138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:16:16.927787  356138 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:16:16.931545  356138 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:16:16.931591  356138 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:16:16.931614  356138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/addons for local assets ...
	I1123 10:16:16.931671  356138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/files for local assets ...
	I1123 10:16:16.931739  356138 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem -> 678702.pem in /etc/ssl/certs
	I1123 10:16:16.931823  356138 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:16:16.939473  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:16:16.959179  356138 start.go:296] duration metric: took 190.46241ms for postStartSetup
	I1123 10:16:16.959501  356138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412306
	I1123 10:16:16.984276  356138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/config.json ...
	I1123 10:16:16.984618  356138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:16:16.984693  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:17.006779  356138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:16:17.112458  356138 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:16:17.117106  356138 start.go:128] duration metric: took 7.513028342s to createHost
	I1123 10:16:17.117133  356138 start.go:83] releasing machines lock for "embed-certs-412306", held for 7.513197957s
	I1123 10:16:17.117208  356138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412306
	I1123 10:16:17.134501  356138 ssh_runner.go:195] Run: cat /version.json
	I1123 10:16:17.134547  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:17.134586  356138 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:16:17.134662  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:17.153344  356138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:16:17.153649  356138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:16:17.310865  356138 ssh_runner.go:195] Run: systemctl --version
	I1123 10:16:17.317393  356138 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:16:17.352355  356138 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:16:17.357116  356138 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:16:17.357180  356138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:16:17.382356  356138 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 10:16:17.382379  356138 start.go:496] detecting cgroup driver to use...
	I1123 10:16:17.382409  356138 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 10:16:17.382462  356138 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:16:17.398562  356138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:16:17.411069  356138 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:16:17.411138  356138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:16:17.427203  356138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:16:17.444861  356138 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:16:17.530800  356138 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:16:17.622946  356138 docker.go:234] disabling docker service ...
	I1123 10:16:17.623025  356138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:16:17.641931  356138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:16:17.654457  356138 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:16:17.747652  356138 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:16:17.845810  356138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:16:17.858620  356138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:16:17.875812  356138 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:16:17.875880  356138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:16:17.888305  356138 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 10:16:17.888379  356138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:16:17.899801  356138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:16:17.911635  356138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:16:17.923072  356138 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:16:17.932765  356138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:16:17.945022  356138 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:16:17.962784  356138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:16:17.974698  356138 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:16:17.984798  356138 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:16:17.994564  356138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:16:18.110636  356138 ssh_runner.go:195] Run: sudo systemctl restart crio
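The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A rough sketch of the relevant keys after those edits (derived from the sed expressions, not captured in this run):

	$ sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# roughly expected:
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",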
	I1123 10:16:18.290560  356138 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:16:18.290681  356138 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:16:18.297099  356138 start.go:564] Will wait 60s for crictl version
	I1123 10:16:18.297225  356138 ssh_runner.go:195] Run: which crictl
	I1123 10:16:18.304375  356138 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:16:18.348465  356138 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:16:18.348551  356138 ssh_runner.go:195] Run: crio --version
	I1123 10:16:18.389627  356138 ssh_runner.go:195] Run: crio --version
	I1123 10:16:18.430444  356138 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:16:18.278756  344706 host.go:66] Checking if "old-k8s-version-990757" exists ...
	I1123 10:16:18.279376  344706 cli_runner.go:164] Run: docker container inspect old-k8s-version-990757 --format={{.State.Status}}
	I1123 10:16:18.279793  344706 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:18.279857  344706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:16:18.280007  344706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-990757
	I1123 10:16:18.306787  344706 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:18.306810  344706 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:16:18.306871  344706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-990757
	I1123 10:16:18.316758  344706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/old-k8s-version-990757/id_rsa Username:docker}
	I1123 10:16:18.336999  344706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/old-k8s-version-990757/id_rsa Username:docker}
	I1123 10:16:18.367903  344706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
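The pipeline above patches the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway. An illustrative way to see the injected block (command and snippet are a sketch, not captured output):

	$ kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# the sed expressions above should have inserted, roughly:
	#     log
	#     hosts {
	#        192.168.76.1 host.minikube.internal
	#        fallthrough
	#     }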
	I1123 10:16:18.433504  344706 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:16:18.466536  344706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:18.470919  344706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:14.268571  344952 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001808065s
	I1123 10:16:14.273043  344952 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 10:16:14.273189  344952 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1123 10:16:14.273313  344952 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 10:16:14.273420  344952 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 10:16:16.059724  344952 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.786566479s
	I1123 10:16:16.921595  344952 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.648519148s
	I1123 10:16:18.777367  344952 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.504051541s
	I1123 10:16:18.794664  344952 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:16:18.805590  344952 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:16:18.816203  344952 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:16:18.816513  344952 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-541522 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:16:18.824772  344952 kubeadm.go:319] [bootstrap-token] Using token: mhptlw.q9ng0jhdmffx1zol
	I1123 10:16:18.826026  344952 out.go:252]   - Configuring RBAC rules ...
	I1123 10:16:18.826262  344952 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:16:18.830334  344952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:16:18.838855  344952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:16:18.843285  344952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:16:18.845986  344952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:16:18.848662  344952 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:16:18.647290  344706 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 10:16:18.648399  344706 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-990757" to be "Ready" ...
	I1123 10:16:18.933557  344706 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 10:16:18.431580  356138 cli_runner.go:164] Run: docker network inspect embed-certs-412306 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:16:18.458210  356138 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1123 10:16:18.464771  356138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:16:18.479461  356138 kubeadm.go:884] updating cluster {Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:16:18.479617  356138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:16:18.479685  356138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:16:18.535015  356138 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:16:18.535043  356138 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:16:18.535112  356138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:16:18.576193  356138 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:16:18.576222  356138 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:16:18.576333  356138 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1123 10:16:18.576476  356138 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-412306 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
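The unit fragment above is the kubelet ExecStart override that minikube installs as a systemd drop-in on the node. A hedged, illustrative way to inspect the merged unit (not part of the captured log):

	$ systemctl cat kubelet
	# should show kubelet.service plus the drop-in carrying the ExecStart line above,
	# including --hostname-override=embed-certs-412306 and --node-ip=192.168.94.2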
	I1123 10:16:18.576564  356138 ssh_runner.go:195] Run: crio config
	I1123 10:16:18.633738  356138 cni.go:84] Creating CNI manager for ""
	I1123 10:16:18.633768  356138 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:16:18.633790  356138 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:16:18.633824  356138 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-412306 NodeName:embed-certs-412306 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:16:18.633989  356138 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-412306"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:16:18.634064  356138 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:16:18.647059  356138 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:16:18.647172  356138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:16:18.658381  356138 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1123 10:16:18.675184  356138 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:16:18.696460  356138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
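The kubeadm configuration dumped above is staged here as /var/tmp/minikube/kubeadm.yaml.new and later promoted to /var/tmp/minikube/kubeadm.yaml before kubeadm init runs. As a sketch (not something this run executed), the same file could be exercised without touching the node via a dry run:

	$ sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	# prints the manifests and actions kubeadm would perform without applying them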
	I1123 10:16:18.712392  356138 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:16:18.717832  356138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:16:18.731391  356138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:16:18.841960  356138 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:16:18.878215  356138 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306 for IP: 192.168.94.2
	I1123 10:16:18.878238  356138 certs.go:195] generating shared ca certs ...
	I1123 10:16:18.878258  356138 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:18.878425  356138 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 10:16:18.878475  356138 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 10:16:18.878488  356138 certs.go:257] generating profile certs ...
	I1123 10:16:18.878556  356138 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/client.key
	I1123 10:16:18.878580  356138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/client.crt with IP's: []
	I1123 10:16:19.147317  356138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/client.crt ...
	I1123 10:16:19.147348  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/client.crt: {Name:mkbf59c08f4785d244500114d39649c207c90bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:19.147525  356138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/client.key ...
	I1123 10:16:19.147545  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/client.key: {Name:mkb75245d2cacd41a4a207ee2cc5a25d4ea8629b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:19.147671  356138 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key.7dd66a37
	I1123 10:16:19.147694  356138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.crt.7dd66a37 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1123 10:16:19.174958  356138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.crt.7dd66a37 ...
	I1123 10:16:19.174991  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.crt.7dd66a37: {Name:mk680cab74fc85275258d54871c4d313a4cfa6da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:19.175171  356138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key.7dd66a37 ...
	I1123 10:16:19.175191  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key.7dd66a37: {Name:mk076b1fd9788864d5fa8bfdccf76cb7bad2f09d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:19.175299  356138 certs.go:382] copying /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.crt.7dd66a37 -> /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.crt
	I1123 10:16:19.175403  356138 certs.go:386] copying /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key.7dd66a37 -> /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key
	I1123 10:16:19.175476  356138 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.key
	I1123 10:16:19.175494  356138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.crt with IP's: []
	I1123 10:16:19.340924  356138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.crt ...
	I1123 10:16:19.340952  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.crt: {Name:mkd487bb2ca9fa1bc04caff7aa2bcbc384decd7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:19.341151  356138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.key ...
	I1123 10:16:19.341173  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.key: {Name:mk7c8f5756d2d24a341f272a1597aebf84673b6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:19.341385  356138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem (1338 bytes)
	W1123 10:16:19.341439  356138 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870_empty.pem, impossibly tiny 0 bytes
	I1123 10:16:19.341456  356138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 10:16:19.341495  356138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:16:19.341530  356138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:16:19.341573  356138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 10:16:19.341632  356138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:16:19.342348  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:16:19.363830  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:16:19.385303  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:16:19.406023  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 10:16:19.433442  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 10:16:19.463003  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 10:16:19.482783  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:16:19.500070  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:16:19.520265  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:16:19.541432  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem --> /usr/share/ca-certificates/67870.pem (1338 bytes)
	I1123 10:16:19.559861  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /usr/share/ca-certificates/678702.pem (1708 bytes)
	I1123 10:16:19.581528  356138 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:16:19.597355  356138 ssh_runner.go:195] Run: openssl version
	I1123 10:16:19.604898  356138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:16:19.614800  356138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:16:19.619006  356138 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:16:19.619057  356138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:16:19.654890  356138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:16:19.664327  356138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67870.pem && ln -fs /usr/share/ca-certificates/67870.pem /etc/ssl/certs/67870.pem"
	I1123 10:16:19.673063  356138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67870.pem
	I1123 10:16:19.676814  356138 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:28 /usr/share/ca-certificates/67870.pem
	I1123 10:16:19.676871  356138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67870.pem
	I1123 10:16:19.721797  356138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67870.pem /etc/ssl/certs/51391683.0"
	I1123 10:16:19.730991  356138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678702.pem && ln -fs /usr/share/ca-certificates/678702.pem /etc/ssl/certs/678702.pem"
	I1123 10:16:19.739616  356138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678702.pem
	I1123 10:16:19.743418  356138 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:28 /usr/share/ca-certificates/678702.pem
	I1123 10:16:19.743475  356138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678702.pem
	I1123 10:16:19.777638  356138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678702.pem /etc/ssl/certs/3ec20f2e.0"
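The openssl/ln sequence above follows the standard OpenSSL subject-hash convention for /etc/ssl/certs. A minimal sketch of the same idea (the example hash value is the one logged for minikubeCA in this run):

	$ HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"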
	I1123 10:16:19.787103  356138 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:16:19.790766  356138 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 10:16:19.790816  356138 kubeadm.go:401] StartCluster: {Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:16:19.790901  356138 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:16:19.790939  356138 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:16:19.819126  356138 cri.go:89] found id: ""
	I1123 10:16:19.819202  356138 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:16:19.827259  356138 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:16:19.835053  356138 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 10:16:19.835138  356138 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:16:19.842912  356138 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 10:16:19.842928  356138 kubeadm.go:158] found existing configuration files:
	
	I1123 10:16:19.842967  356138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 10:16:19.850209  356138 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 10:16:19.850251  356138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 10:16:19.857884  356138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 10:16:19.866646  356138 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 10:16:19.866697  356138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:16:19.874327  356138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 10:16:19.881762  356138 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 10:16:19.881807  356138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:16:19.889164  356138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 10:16:19.896714  356138 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 10:16:19.896758  356138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 10:16:19.904290  356138 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 10:16:19.943603  356138 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 10:16:19.943708  356138 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 10:16:19.965048  356138 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 10:16:19.965154  356138 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 10:16:19.965246  356138 kubeadm.go:319] OS: Linux
	I1123 10:16:19.965327  356138 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 10:16:19.965405  356138 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 10:16:19.965481  356138 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 10:16:19.965573  356138 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 10:16:19.965644  356138 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 10:16:19.965732  356138 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 10:16:19.965823  356138 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 10:16:19.965891  356138 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 10:16:20.026266  356138 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 10:16:20.026438  356138 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 10:16:20.026607  356138 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
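The preflight stage above pulls the control-plane images. The manual equivalent that this message refers to can be previewed with the following command (illustrative; the version is the one used in this run):

	$ kubeadm config images list --kubernetes-version v1.34.1
	# lists the kube-apiserver/controller-manager/scheduler/proxy, etcd, coredns and pause images kubeadm expects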
	I1123 10:16:20.033615  356138 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 10:16:19.189076  344952 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:16:19.601794  344952 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:16:20.183417  344952 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:16:20.185182  344952 kubeadm.go:319] 
	I1123 10:16:20.185298  344952 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:16:20.185319  344952 kubeadm.go:319] 
	I1123 10:16:20.185397  344952 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:16:20.185409  344952 kubeadm.go:319] 
	I1123 10:16:20.185430  344952 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:16:20.185517  344952 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:16:20.185598  344952 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:16:20.185607  344952 kubeadm.go:319] 
	I1123 10:16:20.185682  344952 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:16:20.185690  344952 kubeadm.go:319] 
	I1123 10:16:20.185750  344952 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:16:20.185764  344952 kubeadm.go:319] 
	I1123 10:16:20.185817  344952 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:16:20.185945  344952 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:16:20.186023  344952 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:16:20.186032  344952 kubeadm.go:319] 
	I1123 10:16:20.186178  344952 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:16:20.186301  344952 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:16:20.186313  344952 kubeadm.go:319] 
	I1123 10:16:20.186423  344952 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token mhptlw.q9ng0jhdmffx1zol \
	I1123 10:16:20.186578  344952 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 \
	I1123 10:16:20.186625  344952 kubeadm.go:319] 	--control-plane 
	I1123 10:16:20.186634  344952 kubeadm.go:319] 
	I1123 10:16:20.186761  344952 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:16:20.186780  344952 kubeadm.go:319] 
	I1123 10:16:20.186885  344952 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token mhptlw.q9ng0jhdmffx1zol \
	I1123 10:16:20.187030  344952 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 
	I1123 10:16:20.189698  344952 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 10:16:20.189890  344952 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 10:16:20.189921  344952 cni.go:84] Creating CNI manager for ""
	I1123 10:16:20.189943  344952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:16:20.192370  344952 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1123 10:16:18.007511  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:20.508070  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	I1123 10:16:18.934624  344706 addons.go:530] duration metric: took 681.995047ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 10:16:19.151704  344706 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-990757" context rescaled to 1 replicas
	W1123 10:16:20.652483  344706 node_ready.go:57] node "old-k8s-version-990757" has "Ready":"False" status (will retry)
	W1123 10:16:23.151550  344706 node_ready.go:57] node "old-k8s-version-990757" has "Ready":"False" status (will retry)
	I1123 10:16:20.035950  356138 out.go:252]   - Generating certificates and keys ...
	I1123 10:16:20.036023  356138 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 10:16:20.036138  356138 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 10:16:20.199227  356138 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:16:20.296867  356138 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 10:16:20.649116  356138 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:16:20.853583  356138 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:16:21.223354  356138 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:16:21.223524  356138 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-412306 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1123 10:16:21.589454  356138 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:16:21.589601  356138 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-412306 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1123 10:16:21.712733  356138 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:16:22.231370  356138 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:16:22.493251  356138 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:16:22.493387  356138 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:16:22.795558  356138 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:16:22.972083  356138 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 10:16:23.034642  356138 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:16:23.345102  356138 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:16:23.769569  356138 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:16:23.770179  356138 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:16:23.773491  356138 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 10:16:20.193529  344952 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:16:20.198365  344952 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:16:20.198385  344952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:16:20.211881  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:16:20.437045  344952 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:16:20.437128  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:20.437165  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-541522 minikube.k8s.io/updated_at=2025_11_23T10_16_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=no-preload-541522 minikube.k8s.io/primary=true
	I1123 10:16:20.561626  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:20.561779  344952 ops.go:34] apiserver oom_adj: -16
	I1123 10:16:21.061993  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:21.561692  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:22.061999  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:22.561862  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:23.062326  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:23.561744  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:23.775519  356138 out.go:252]   - Booting up control plane ...
	I1123 10:16:23.775641  356138 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:16:23.775760  356138 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:16:23.775870  356138 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:16:23.790389  356138 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:16:23.790543  356138 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 10:16:23.797027  356138 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 10:16:23.797353  356138 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:16:23.797453  356138 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:16:23.917379  356138 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 10:16:23.917528  356138 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 10:16:24.062736  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:24.562369  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:24.632270  344952 kubeadm.go:1114] duration metric: took 4.195217058s to wait for elevateKubeSystemPrivileges
	I1123 10:16:24.632308  344952 kubeadm.go:403] duration metric: took 16.142295896s to StartCluster
	I1123 10:16:24.632326  344952 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:24.632400  344952 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:16:24.633884  344952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:24.634150  344952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:16:24.634179  344952 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:16:24.634251  344952 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:16:24.634355  344952 addons.go:70] Setting storage-provisioner=true in profile "no-preload-541522"
	I1123 10:16:24.634368  344952 addons.go:70] Setting default-storageclass=true in profile "no-preload-541522"
	I1123 10:16:24.634377  344952 addons.go:239] Setting addon storage-provisioner=true in "no-preload-541522"
	I1123 10:16:24.634388  344952 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-541522"
	I1123 10:16:24.634410  344952 host.go:66] Checking if "no-preload-541522" exists ...
	I1123 10:16:24.634455  344952 config.go:182] Loaded profile config "no-preload-541522": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:16:24.634764  344952 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:16:24.634912  344952 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:16:24.635539  344952 out.go:179] * Verifying Kubernetes components...
	I1123 10:16:24.636521  344952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:16:24.657418  344952 addons.go:239] Setting addon default-storageclass=true in "no-preload-541522"
	I1123 10:16:24.657470  344952 host.go:66] Checking if "no-preload-541522" exists ...
	I1123 10:16:24.657938  344952 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:16:24.658491  344952 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:16:24.659646  344952 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:24.659666  344952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:16:24.659724  344952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:16:24.685525  344952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:16:24.690195  344952 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:24.690219  344952 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:16:24.690298  344952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:16:24.724298  344952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:16:24.750701  344952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:16:24.796123  344952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:16:24.848328  344952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:24.848334  344952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:24.923983  344952 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 10:16:24.925356  344952 node_ready.go:35] waiting up to 6m0s for node "no-preload-541522" to be "Ready" ...
	I1123 10:16:25.228703  344952 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1123 10:16:23.006965  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:25.008124  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:25.154186  344706 node_ready.go:57] node "old-k8s-version-990757" has "Ready":"False" status (will retry)
	W1123 10:16:27.651716  344706 node_ready.go:57] node "old-k8s-version-990757" has "Ready":"False" status (will retry)
	I1123 10:16:25.229824  344952 addons.go:530] duration metric: took 595.565525ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 10:16:25.428798  344952 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-541522" context rescaled to 1 replicas
	W1123 10:16:26.929589  344952 node_ready.go:57] node "no-preload-541522" has "Ready":"False" status (will retry)
	I1123 10:16:24.918996  356138 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001753375s
	I1123 10:16:24.925621  356138 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 10:16:24.925735  356138 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1123 10:16:24.925858  356138 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 10:16:24.925971  356138 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 10:16:26.512191  356138 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.587992193s
	I1123 10:16:27.081491  356138 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.157460492s
	I1123 10:16:28.925636  356138 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001590433s
	I1123 10:16:28.937425  356138 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:16:28.947025  356138 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:16:28.955505  356138 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:16:28.955787  356138 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-412306 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:16:28.963030  356138 kubeadm.go:319] [bootstrap-token] Using token: 2diej7.g3irisej2sfcnkox
	I1123 10:16:28.965317  356138 out.go:252]   - Configuring RBAC rules ...
	I1123 10:16:28.965442  356138 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:16:28.968022  356138 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:16:28.973224  356138 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:16:28.975951  356138 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:16:28.978262  356138 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:16:28.981645  356138 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:16:29.331628  356138 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:16:29.745711  356138 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:16:30.331119  356138 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:16:30.331918  356138 kubeadm.go:319] 
	I1123 10:16:30.332036  356138 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:16:30.332056  356138 kubeadm.go:319] 
	I1123 10:16:30.332201  356138 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:16:30.332221  356138 kubeadm.go:319] 
	I1123 10:16:30.332275  356138 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:16:30.332347  356138 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:16:30.332408  356138 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:16:30.332416  356138 kubeadm.go:319] 
	I1123 10:16:30.332478  356138 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:16:30.332486  356138 kubeadm.go:319] 
	I1123 10:16:30.332540  356138 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:16:30.332548  356138 kubeadm.go:319] 
	I1123 10:16:30.332612  356138 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:16:30.332708  356138 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:16:30.332818  356138 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:16:30.332837  356138 kubeadm.go:319] 
	I1123 10:16:30.332958  356138 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:16:30.333060  356138 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:16:30.333076  356138 kubeadm.go:319] 
	I1123 10:16:30.333211  356138 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2diej7.g3irisej2sfcnkox \
	I1123 10:16:30.333342  356138 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 \
	I1123 10:16:30.333366  356138 kubeadm.go:319] 	--control-plane 
	I1123 10:16:30.333375  356138 kubeadm.go:319] 
	I1123 10:16:30.333446  356138 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:16:30.333451  356138 kubeadm.go:319] 
	I1123 10:16:30.333535  356138 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2diej7.g3irisej2sfcnkox \
	I1123 10:16:30.333651  356138 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 
	I1123 10:16:30.336224  356138 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 10:16:30.336339  356138 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 10:16:30.336389  356138 cni.go:84] Creating CNI manager for ""
	I1123 10:16:30.336405  356138 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:16:30.401160  356138 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1123 10:16:27.506801  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:29.507199  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:29.651902  344706 node_ready.go:57] node "old-k8s-version-990757" has "Ready":"False" status (will retry)
	W1123 10:16:32.152208  344706 node_ready.go:57] node "old-k8s-version-990757" has "Ready":"False" status (will retry)
	I1123 10:16:32.651044  344706 node_ready.go:49] node "old-k8s-version-990757" is "Ready"
	I1123 10:16:32.651072  344706 node_ready.go:38] duration metric: took 14.002600443s for node "old-k8s-version-990757" to be "Ready" ...
	I1123 10:16:32.651103  344706 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:16:32.651154  344706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:16:32.664668  344706 api_server.go:72] duration metric: took 14.412040415s to wait for apiserver process to appear ...
	I1123 10:16:32.664699  344706 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:16:32.664734  344706 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:16:32.671045  344706 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:16:32.672175  344706 api_server.go:141] control plane version: v1.28.0
	I1123 10:16:32.672198  344706 api_server.go:131] duration metric: took 7.493612ms to wait for apiserver health ...
	I1123 10:16:32.672206  344706 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:16:32.675396  344706 system_pods.go:59] 8 kube-system pods found
	I1123 10:16:32.675423  344706 system_pods.go:61] "coredns-5dd5756b68-fsbfv" [d381637c-3686-4e19-95eb-489a0328363d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:32.675429  344706 system_pods.go:61] "etcd-old-k8s-version-990757" [9544c436-c89f-4d93-961e-c3d059a7e093] Running
	I1123 10:16:32.675438  344706 system_pods.go:61] "kindnet-nz2m9" [2de3e7ea-96dc-4120-8500-245759aaacda] Running
	I1123 10:16:32.675442  344706 system_pods.go:61] "kube-apiserver-old-k8s-version-990757" [ad563081-657a-4c35-8404-696aa7aa0e9c] Running
	I1123 10:16:32.675446  344706 system_pods.go:61] "kube-controller-manager-old-k8s-version-990757" [71f2226e-4030-45a3-a5dc-1f58332c62d8] Running
	I1123 10:16:32.675455  344706 system_pods.go:61] "kube-proxy-99g4b" [d727ffbe-b078-4abf-a715-fc9811920e00] Running
	I1123 10:16:32.675461  344706 system_pods.go:61] "kube-scheduler-old-k8s-version-990757" [6d10eeed-2aa8-44d8-9800-7b8a0992f902] Running
	I1123 10:16:32.675466  344706 system_pods.go:61] "storage-provisioner" [b9036b3a-e19e-439b-9584-93d805cb21ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:32.675474  344706 system_pods.go:74] duration metric: took 3.26216ms to wait for pod list to return data ...
	I1123 10:16:32.675483  344706 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:16:32.677500  344706 default_sa.go:45] found service account: "default"
	I1123 10:16:32.677517  344706 default_sa.go:55] duration metric: took 2.029784ms for default service account to be created ...
	I1123 10:16:32.677525  344706 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:16:32.680674  344706 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:32.680700  344706 system_pods.go:89] "coredns-5dd5756b68-fsbfv" [d381637c-3686-4e19-95eb-489a0328363d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:32.680707  344706 system_pods.go:89] "etcd-old-k8s-version-990757" [9544c436-c89f-4d93-961e-c3d059a7e093] Running
	I1123 10:16:32.680719  344706 system_pods.go:89] "kindnet-nz2m9" [2de3e7ea-96dc-4120-8500-245759aaacda] Running
	I1123 10:16:32.680730  344706 system_pods.go:89] "kube-apiserver-old-k8s-version-990757" [ad563081-657a-4c35-8404-696aa7aa0e9c] Running
	I1123 10:16:32.680736  344706 system_pods.go:89] "kube-controller-manager-old-k8s-version-990757" [71f2226e-4030-45a3-a5dc-1f58332c62d8] Running
	I1123 10:16:32.680745  344706 system_pods.go:89] "kube-proxy-99g4b" [d727ffbe-b078-4abf-a715-fc9811920e00] Running
	I1123 10:16:32.680751  344706 system_pods.go:89] "kube-scheduler-old-k8s-version-990757" [6d10eeed-2aa8-44d8-9800-7b8a0992f902] Running
	I1123 10:16:32.680760  344706 system_pods.go:89] "storage-provisioner" [b9036b3a-e19e-439b-9584-93d805cb21ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:32.680799  344706 retry.go:31] will retry after 291.35829ms: missing components: kube-dns
	I1123 10:16:32.977121  344706 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:32.977154  344706 system_pods.go:89] "coredns-5dd5756b68-fsbfv" [d381637c-3686-4e19-95eb-489a0328363d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:32.977161  344706 system_pods.go:89] "etcd-old-k8s-version-990757" [9544c436-c89f-4d93-961e-c3d059a7e093] Running
	I1123 10:16:32.977168  344706 system_pods.go:89] "kindnet-nz2m9" [2de3e7ea-96dc-4120-8500-245759aaacda] Running
	I1123 10:16:32.977172  344706 system_pods.go:89] "kube-apiserver-old-k8s-version-990757" [ad563081-657a-4c35-8404-696aa7aa0e9c] Running
	I1123 10:16:32.977176  344706 system_pods.go:89] "kube-controller-manager-old-k8s-version-990757" [71f2226e-4030-45a3-a5dc-1f58332c62d8] Running
	I1123 10:16:32.977188  344706 system_pods.go:89] "kube-proxy-99g4b" [d727ffbe-b078-4abf-a715-fc9811920e00] Running
	I1123 10:16:32.977195  344706 system_pods.go:89] "kube-scheduler-old-k8s-version-990757" [6d10eeed-2aa8-44d8-9800-7b8a0992f902] Running
	I1123 10:16:32.977199  344706 system_pods.go:89] "storage-provisioner" [b9036b3a-e19e-439b-9584-93d805cb21ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:32.977215  344706 retry.go:31] will retry after 325.371921ms: missing components: kube-dns
	I1123 10:16:33.307183  344706 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:33.307222  344706 system_pods.go:89] "coredns-5dd5756b68-fsbfv" [d381637c-3686-4e19-95eb-489a0328363d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:33.307228  344706 system_pods.go:89] "etcd-old-k8s-version-990757" [9544c436-c89f-4d93-961e-c3d059a7e093] Running
	I1123 10:16:33.307234  344706 system_pods.go:89] "kindnet-nz2m9" [2de3e7ea-96dc-4120-8500-245759aaacda] Running
	I1123 10:16:33.307237  344706 system_pods.go:89] "kube-apiserver-old-k8s-version-990757" [ad563081-657a-4c35-8404-696aa7aa0e9c] Running
	I1123 10:16:33.307241  344706 system_pods.go:89] "kube-controller-manager-old-k8s-version-990757" [71f2226e-4030-45a3-a5dc-1f58332c62d8] Running
	I1123 10:16:33.307244  344706 system_pods.go:89] "kube-proxy-99g4b" [d727ffbe-b078-4abf-a715-fc9811920e00] Running
	I1123 10:16:33.307253  344706 system_pods.go:89] "kube-scheduler-old-k8s-version-990757" [6d10eeed-2aa8-44d8-9800-7b8a0992f902] Running
	I1123 10:16:33.307257  344706 system_pods.go:89] "storage-provisioner" [b9036b3a-e19e-439b-9584-93d805cb21ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:33.307274  344706 retry.go:31] will retry after 477.295588ms: missing components: kube-dns
	W1123 10:16:29.428459  344952 node_ready.go:57] node "no-preload-541522" has "Ready":"False" status (will retry)
	W1123 10:16:31.428879  344952 node_ready.go:57] node "no-preload-541522" has "Ready":"False" status (will retry)
	W1123 10:16:33.429049  344952 node_ready.go:57] node "no-preload-541522" has "Ready":"False" status (will retry)
	I1123 10:16:30.402276  356138 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:16:30.407016  356138 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:16:30.407034  356138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:16:30.424045  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:16:30.638241  356138 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:16:30.638352  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:30.638388  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-412306 minikube.k8s.io/updated_at=2025_11_23T10_16_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=embed-certs-412306 minikube.k8s.io/primary=true
	I1123 10:16:30.648402  356138 ops.go:34] apiserver oom_adj: -16
	I1123 10:16:30.709488  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:31.210134  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:31.710498  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:32.209893  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:32.709530  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:33.209575  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:33.709563  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:34.210241  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:34.709746  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:35.210264  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:35.283600  356138 kubeadm.go:1114] duration metric: took 4.64531381s to wait for elevateKubeSystemPrivileges
	I1123 10:16:35.283643  356138 kubeadm.go:403] duration metric: took 15.49282887s to StartCluster
	I1123 10:16:35.283665  356138 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:35.283762  356138 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:16:35.285869  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:35.286180  356138 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:16:35.286331  356138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:16:35.286610  356138 config.go:182] Loaded profile config "embed-certs-412306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:16:35.286435  356138 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:16:35.286707  356138 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-412306"
	I1123 10:16:35.286812  356138 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-412306"
	I1123 10:16:35.286885  356138 host.go:66] Checking if "embed-certs-412306" exists ...
	I1123 10:16:35.286746  356138 addons.go:70] Setting default-storageclass=true in profile "embed-certs-412306"
	I1123 10:16:35.287011  356138 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-412306"
	I1123 10:16:35.287600  356138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:16:35.287780  356138 out.go:179] * Verifying Kubernetes components...
	I1123 10:16:35.288910  356138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:16:35.289524  356138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:16:35.314640  356138 addons.go:239] Setting addon default-storageclass=true in "embed-certs-412306"
	I1123 10:16:35.314789  356138 host.go:66] Checking if "embed-certs-412306" exists ...
	I1123 10:16:35.315364  356138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:16:35.316039  356138 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:16:33.788957  344706 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:33.788988  344706 system_pods.go:89] "coredns-5dd5756b68-fsbfv" [d381637c-3686-4e19-95eb-489a0328363d] Running
	I1123 10:16:33.788994  344706 system_pods.go:89] "etcd-old-k8s-version-990757" [9544c436-c89f-4d93-961e-c3d059a7e093] Running
	I1123 10:16:33.788997  344706 system_pods.go:89] "kindnet-nz2m9" [2de3e7ea-96dc-4120-8500-245759aaacda] Running
	I1123 10:16:33.789001  344706 system_pods.go:89] "kube-apiserver-old-k8s-version-990757" [ad563081-657a-4c35-8404-696aa7aa0e9c] Running
	I1123 10:16:33.789006  344706 system_pods.go:89] "kube-controller-manager-old-k8s-version-990757" [71f2226e-4030-45a3-a5dc-1f58332c62d8] Running
	I1123 10:16:33.789009  344706 system_pods.go:89] "kube-proxy-99g4b" [d727ffbe-b078-4abf-a715-fc9811920e00] Running
	I1123 10:16:33.789013  344706 system_pods.go:89] "kube-scheduler-old-k8s-version-990757" [6d10eeed-2aa8-44d8-9800-7b8a0992f902] Running
	I1123 10:16:33.789017  344706 system_pods.go:89] "storage-provisioner" [b9036b3a-e19e-439b-9584-93d805cb21ea] Running
	I1123 10:16:33.789025  344706 system_pods.go:126] duration metric: took 1.111493702s to wait for k8s-apps to be running ...
	I1123 10:16:33.789036  344706 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:16:33.789083  344706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:16:33.801872  344706 system_svc.go:56] duration metric: took 12.824145ms WaitForService to wait for kubelet
	I1123 10:16:33.801901  344706 kubeadm.go:587] duration metric: took 15.549282124s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:16:33.801917  344706 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:16:33.804486  344706 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:16:33.804512  344706 node_conditions.go:123] node cpu capacity is 8
	I1123 10:16:33.804532  344706 node_conditions.go:105] duration metric: took 2.608231ms to run NodePressure ...
	I1123 10:16:33.804549  344706 start.go:242] waiting for startup goroutines ...
	I1123 10:16:33.804563  344706 start.go:247] waiting for cluster config update ...
	I1123 10:16:33.804579  344706 start.go:256] writing updated cluster config ...
	I1123 10:16:33.804859  344706 ssh_runner.go:195] Run: rm -f paused
	I1123 10:16:33.808438  344706 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:33.812221  344706 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-fsbfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:33.816745  344706 pod_ready.go:94] pod "coredns-5dd5756b68-fsbfv" is "Ready"
	I1123 10:16:33.816770  344706 pod_ready.go:86] duration metric: took 4.52627ms for pod "coredns-5dd5756b68-fsbfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:33.819363  344706 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:33.823014  344706 pod_ready.go:94] pod "etcd-old-k8s-version-990757" is "Ready"
	I1123 10:16:33.823034  344706 pod_ready.go:86] duration metric: took 3.64929ms for pod "etcd-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:33.825305  344706 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:33.830141  344706 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-990757" is "Ready"
	I1123 10:16:33.830162  344706 pod_ready.go:86] duration metric: took 4.841585ms for pod "kube-apiserver-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:33.832571  344706 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:34.213051  344706 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-990757" is "Ready"
	I1123 10:16:34.213110  344706 pod_ready.go:86] duration metric: took 380.4924ms for pod "kube-controller-manager-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:34.413069  344706 pod_ready.go:83] waiting for pod "kube-proxy-99g4b" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:34.813198  344706 pod_ready.go:94] pod "kube-proxy-99g4b" is "Ready"
	I1123 10:16:34.813228  344706 pod_ready.go:86] duration metric: took 400.102635ms for pod "kube-proxy-99g4b" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:35.012747  344706 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:35.412818  344706 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-990757" is "Ready"
	I1123 10:16:35.412845  344706 pod_ready.go:86] duration metric: took 400.068338ms for pod "kube-scheduler-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:35.412857  344706 pod_ready.go:40] duration metric: took 1.604388715s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:35.469188  344706 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1123 10:16:35.510336  344706 out.go:203] 
	W1123 10:16:35.512291  344706 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 10:16:35.513439  344706 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 10:16:35.514923  344706 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-990757" cluster and "default" namespace by default
	I1123 10:16:35.317954  356138 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:35.317987  356138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:16:35.318441  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:35.340962  356138 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:35.340989  356138 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:16:35.341107  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:35.347702  356138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:16:35.369097  356138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:16:35.375674  356138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:16:35.442865  356138 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:16:35.465653  356138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:35.487123  356138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:35.561205  356138 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1123 10:16:35.562463  356138 node_ready.go:35] waiting up to 6m0s for node "embed-certs-412306" to be "Ready" ...
	I1123 10:16:35.788632  356138 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1123 10:16:32.005830  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:34.006310  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:36.007382  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:35.430057  344952 node_ready.go:57] node "no-preload-541522" has "Ready":"False" status (will retry)
	W1123 10:16:37.929223  344952 node_ready.go:57] node "no-preload-541522" has "Ready":"False" status (will retry)
	I1123 10:16:35.789494  356138 addons.go:530] duration metric: took 503.064926ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 10:16:36.066022  356138 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-412306" context rescaled to 1 replicas
	W1123 10:16:37.565650  356138 node_ready.go:57] node "embed-certs-412306" has "Ready":"False" status (will retry)
	W1123 10:16:38.507551  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:41.006771  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	I1123 10:16:38.928775  344952 node_ready.go:49] node "no-preload-541522" is "Ready"
	I1123 10:16:38.928809  344952 node_ready.go:38] duration metric: took 14.003414343s for node "no-preload-541522" to be "Ready" ...
	I1123 10:16:38.928827  344952 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:16:38.928893  344952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:16:38.941967  344952 api_server.go:72] duration metric: took 14.30774812s to wait for apiserver process to appear ...
	I1123 10:16:38.941992  344952 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:16:38.942007  344952 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:16:38.946871  344952 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 10:16:38.947779  344952 api_server.go:141] control plane version: v1.34.1
	I1123 10:16:38.947803  344952 api_server.go:131] duration metric: took 5.806056ms to wait for apiserver health ...
	I1123 10:16:38.947811  344952 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:16:38.951278  344952 system_pods.go:59] 8 kube-system pods found
	I1123 10:16:38.951306  344952 system_pods.go:61] "coredns-66bc5c9577-krmwt" [39101b53-5254-41f3-bac9-c711e67dc551] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:38.951313  344952 system_pods.go:61] "etcd-no-preload-541522" [80258726-c8e2-4b27-962c-ee45e6948d2c] Running
	I1123 10:16:38.951318  344952 system_pods.go:61] "kindnet-9vppw" [3b98e7a4-34e9-46af-97a1-764b6ed92ec6] Running
	I1123 10:16:38.951322  344952 system_pods.go:61] "kube-apiserver-no-preload-541522" [54bb8554-b2d7-4fc2-9d26-507e36b6d56f] Running
	I1123 10:16:38.951328  344952 system_pods.go:61] "kube-controller-manager-no-preload-541522" [b6d91917-0381-4558-9f2a-769f81cf9d86] Running
	I1123 10:16:38.951333  344952 system_pods.go:61] "kube-proxy-sllct" [c5b13417-4bca-4ec1-8e60-cf5016aa28ca] Running
	I1123 10:16:38.951337  344952 system_pods.go:61] "kube-scheduler-no-preload-541522" [31a3c55f-ac27-4800-af06-822af5bc6836] Running
	I1123 10:16:38.951341  344952 system_pods.go:61] "storage-provisioner" [40eb99ea-9515-431c-888b-81826014f8a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:38.951347  344952 system_pods.go:74] duration metric: took 3.530661ms to wait for pod list to return data ...
	I1123 10:16:38.951356  344952 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:16:38.953395  344952 default_sa.go:45] found service account: "default"
	I1123 10:16:38.953416  344952 default_sa.go:55] duration metric: took 2.05549ms for default service account to be created ...
	I1123 10:16:38.953424  344952 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:16:38.955705  344952 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:38.955729  344952 system_pods.go:89] "coredns-66bc5c9577-krmwt" [39101b53-5254-41f3-bac9-c711e67dc551] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:38.955735  344952 system_pods.go:89] "etcd-no-preload-541522" [80258726-c8e2-4b27-962c-ee45e6948d2c] Running
	I1123 10:16:38.955743  344952 system_pods.go:89] "kindnet-9vppw" [3b98e7a4-34e9-46af-97a1-764b6ed92ec6] Running
	I1123 10:16:38.955749  344952 system_pods.go:89] "kube-apiserver-no-preload-541522" [54bb8554-b2d7-4fc2-9d26-507e36b6d56f] Running
	I1123 10:16:38.955755  344952 system_pods.go:89] "kube-controller-manager-no-preload-541522" [b6d91917-0381-4558-9f2a-769f81cf9d86] Running
	I1123 10:16:38.955766  344952 system_pods.go:89] "kube-proxy-sllct" [c5b13417-4bca-4ec1-8e60-cf5016aa28ca] Running
	I1123 10:16:38.955774  344952 system_pods.go:89] "kube-scheduler-no-preload-541522" [31a3c55f-ac27-4800-af06-822af5bc6836] Running
	I1123 10:16:38.955785  344952 system_pods.go:89] "storage-provisioner" [40eb99ea-9515-431c-888b-81826014f8a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:38.955807  344952 retry.go:31] will retry after 286.541435ms: missing components: kube-dns
	I1123 10:16:39.246793  344952 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:39.246834  344952 system_pods.go:89] "coredns-66bc5c9577-krmwt" [39101b53-5254-41f3-bac9-c711e67dc551] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:39.246842  344952 system_pods.go:89] "etcd-no-preload-541522" [80258726-c8e2-4b27-962c-ee45e6948d2c] Running
	I1123 10:16:39.246850  344952 system_pods.go:89] "kindnet-9vppw" [3b98e7a4-34e9-46af-97a1-764b6ed92ec6] Running
	I1123 10:16:39.246855  344952 system_pods.go:89] "kube-apiserver-no-preload-541522" [54bb8554-b2d7-4fc2-9d26-507e36b6d56f] Running
	I1123 10:16:39.246861  344952 system_pods.go:89] "kube-controller-manager-no-preload-541522" [b6d91917-0381-4558-9f2a-769f81cf9d86] Running
	I1123 10:16:39.246866  344952 system_pods.go:89] "kube-proxy-sllct" [c5b13417-4bca-4ec1-8e60-cf5016aa28ca] Running
	I1123 10:16:39.246876  344952 system_pods.go:89] "kube-scheduler-no-preload-541522" [31a3c55f-ac27-4800-af06-822af5bc6836] Running
	I1123 10:16:39.246889  344952 system_pods.go:89] "storage-provisioner" [40eb99ea-9515-431c-888b-81826014f8a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:39.246907  344952 retry.go:31] will retry after 342.610222ms: missing components: kube-dns
	I1123 10:16:39.594146  344952 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:39.594183  344952 system_pods.go:89] "coredns-66bc5c9577-krmwt" [39101b53-5254-41f3-bac9-c711e67dc551] Running
	I1123 10:16:39.594196  344952 system_pods.go:89] "etcd-no-preload-541522" [80258726-c8e2-4b27-962c-ee45e6948d2c] Running
	I1123 10:16:39.594200  344952 system_pods.go:89] "kindnet-9vppw" [3b98e7a4-34e9-46af-97a1-764b6ed92ec6] Running
	I1123 10:16:39.594204  344952 system_pods.go:89] "kube-apiserver-no-preload-541522" [54bb8554-b2d7-4fc2-9d26-507e36b6d56f] Running
	I1123 10:16:39.594210  344952 system_pods.go:89] "kube-controller-manager-no-preload-541522" [b6d91917-0381-4558-9f2a-769f81cf9d86] Running
	I1123 10:16:39.594215  344952 system_pods.go:89] "kube-proxy-sllct" [c5b13417-4bca-4ec1-8e60-cf5016aa28ca] Running
	I1123 10:16:39.594220  344952 system_pods.go:89] "kube-scheduler-no-preload-541522" [31a3c55f-ac27-4800-af06-822af5bc6836] Running
	I1123 10:16:39.594226  344952 system_pods.go:89] "storage-provisioner" [40eb99ea-9515-431c-888b-81826014f8a6] Running
	I1123 10:16:39.594236  344952 system_pods.go:126] duration metric: took 640.805319ms to wait for k8s-apps to be running ...
	I1123 10:16:39.594250  344952 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:16:39.594310  344952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:16:39.608983  344952 system_svc.go:56] duration metric: took 14.722696ms WaitForService to wait for kubelet
	I1123 10:16:39.609015  344952 kubeadm.go:587] duration metric: took 14.97480089s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:16:39.609037  344952 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:16:39.611842  344952 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:16:39.611865  344952 node_conditions.go:123] node cpu capacity is 8
	I1123 10:16:39.611882  344952 node_conditions.go:105] duration metric: took 2.839945ms to run NodePressure ...
	I1123 10:16:39.611895  344952 start.go:242] waiting for startup goroutines ...
	I1123 10:16:39.611908  344952 start.go:247] waiting for cluster config update ...
	I1123 10:16:39.611919  344952 start.go:256] writing updated cluster config ...
	I1123 10:16:39.612185  344952 ssh_runner.go:195] Run: rm -f paused
	I1123 10:16:39.616031  344952 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:39.619510  344952 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-krmwt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:39.623392  344952 pod_ready.go:94] pod "coredns-66bc5c9577-krmwt" is "Ready"
	I1123 10:16:39.623415  344952 pod_ready.go:86] duration metric: took 3.869312ms for pod "coredns-66bc5c9577-krmwt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:39.625265  344952 pod_ready.go:83] waiting for pod "etcd-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:39.628641  344952 pod_ready.go:94] pod "etcd-no-preload-541522" is "Ready"
	I1123 10:16:39.628659  344952 pod_ready.go:86] duration metric: took 3.374871ms for pod "etcd-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:39.630356  344952 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:39.633564  344952 pod_ready.go:94] pod "kube-apiserver-no-preload-541522" is "Ready"
	I1123 10:16:39.633587  344952 pod_ready.go:86] duration metric: took 3.21019ms for pod "kube-apiserver-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:39.635340  344952 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:40.020259  344952 pod_ready.go:94] pod "kube-controller-manager-no-preload-541522" is "Ready"
	I1123 10:16:40.020290  344952 pod_ready.go:86] duration metric: took 384.929039ms for pod "kube-controller-manager-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:40.220795  344952 pod_ready.go:83] waiting for pod "kube-proxy-sllct" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:40.620970  344952 pod_ready.go:94] pod "kube-proxy-sllct" is "Ready"
	I1123 10:16:40.621002  344952 pod_ready.go:86] duration metric: took 400.183007ms for pod "kube-proxy-sllct" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:40.819960  344952 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:41.219866  344952 pod_ready.go:94] pod "kube-scheduler-no-preload-541522" is "Ready"
	I1123 10:16:41.219893  344952 pod_ready.go:86] duration metric: took 399.908601ms for pod "kube-scheduler-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:41.219905  344952 pod_ready.go:40] duration metric: took 1.603850974s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:41.264158  344952 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:16:41.265945  344952 out.go:179] * Done! kubectl is now configured to use "no-preload-541522" cluster and "default" namespace by default
	I1123 10:16:42.506018  341630 pod_ready.go:94] pod "coredns-66bc5c9577-p6sw2" is "Ready"
	I1123 10:16:42.506054  341630 pod_ready.go:86] duration metric: took 31.004987147s for pod "coredns-66bc5c9577-p6sw2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:42.508459  341630 pod_ready.go:83] waiting for pod "etcd-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:42.514192  341630 pod_ready.go:94] pod "etcd-bridge-791161" is "Ready"
	I1123 10:16:42.514218  341630 pod_ready.go:86] duration metric: took 5.738216ms for pod "etcd-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:42.516115  341630 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:42.519705  341630 pod_ready.go:94] pod "kube-apiserver-bridge-791161" is "Ready"
	I1123 10:16:42.519724  341630 pod_ready.go:86] duration metric: took 3.591711ms for pod "kube-apiserver-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:42.521450  341630 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:42.704830  341630 pod_ready.go:94] pod "kube-controller-manager-bridge-791161" is "Ready"
	I1123 10:16:42.704859  341630 pod_ready.go:86] duration metric: took 183.390224ms for pod "kube-controller-manager-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:42.905328  341630 pod_ready.go:83] waiting for pod "kube-proxy-sn6s2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:43.304355  341630 pod_ready.go:94] pod "kube-proxy-sn6s2" is "Ready"
	I1123 10:16:43.304382  341630 pod_ready.go:86] duration metric: took 399.024239ms for pod "kube-proxy-sn6s2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:43.504607  341630 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:43.905001  341630 pod_ready.go:94] pod "kube-scheduler-bridge-791161" is "Ready"
	I1123 10:16:43.905030  341630 pod_ready.go:86] duration metric: took 400.39674ms for pod "kube-scheduler-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:43.905043  341630 pod_ready.go:40] duration metric: took 32.407876329s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:43.960235  341630 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:16:43.961459  341630 out.go:179] * Done! kubectl is now configured to use "bridge-791161" cluster and "default" namespace by default
	W1123 10:16:40.065837  356138 node_ready.go:57] node "embed-certs-412306" has "Ready":"False" status (will retry)
	W1123 10:16:42.565358  356138 node_ready.go:57] node "embed-certs-412306" has "Ready":"False" status (will retry)
	W1123 10:16:45.068207  356138 node_ready.go:57] node "embed-certs-412306" has "Ready":"False" status (will retry)
	I1123 10:16:46.568628  356138 node_ready.go:49] node "embed-certs-412306" is "Ready"
	I1123 10:16:46.568656  356138 node_ready.go:38] duration metric: took 11.006153698s for node "embed-certs-412306" to be "Ready" ...
	I1123 10:16:46.568672  356138 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:16:46.568716  356138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:16:46.582933  356138 api_server.go:72] duration metric: took 11.296710961s to wait for apiserver process to appear ...
	I1123 10:16:46.582964  356138 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:16:46.582989  356138 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 10:16:46.588509  356138 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 10:16:46.589515  356138 api_server.go:141] control plane version: v1.34.1
	I1123 10:16:46.589535  356138 api_server.go:131] duration metric: took 6.56399ms to wait for apiserver health ...
	I1123 10:16:46.589544  356138 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:16:46.592533  356138 system_pods.go:59] 8 kube-system pods found
	I1123 10:16:46.592562  356138 system_pods.go:61] "coredns-66bc5c9577-fxl7j" [4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:46.592569  356138 system_pods.go:61] "etcd-embed-certs-412306" [f8befdc6-c172-4569-9ca7-2d3ba827dbb5] Running
	I1123 10:16:46.592578  356138 system_pods.go:61] "kindnet-sm2h2" [1af4c3f2-8377-4a64-9499-502b9841a81d] Running
	I1123 10:16:46.592587  356138 system_pods.go:61] "kube-apiserver-embed-certs-412306" [0c456387-52ea-4271-af83-9b87f7ddc832] Running
	I1123 10:16:46.592593  356138 system_pods.go:61] "kube-controller-manager-embed-certs-412306" [cebfc94c-5d85-40f3-8099-b50676f43ef5] Running
	I1123 10:16:46.592602  356138 system_pods.go:61] "kube-proxy-2vnjq" [10c4fa48-37ca-4164-83ef-7ab034f844a9] Running
	I1123 10:16:46.592607  356138 system_pods.go:61] "kube-scheduler-embed-certs-412306" [9384ec5c-f592-4f4d-84ba-313b7eabf50c] Running
	I1123 10:16:46.592620  356138 system_pods.go:61] "storage-provisioner" [199ec01f-2a64-4666-af02-cd1ad7ae4cc2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:46.592631  356138 system_pods.go:74] duration metric: took 3.080482ms to wait for pod list to return data ...
	I1123 10:16:46.592641  356138 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:16:46.595192  356138 default_sa.go:45] found service account: "default"
	I1123 10:16:46.595213  356138 default_sa.go:55] duration metric: took 2.563019ms for default service account to be created ...
	I1123 10:16:46.595223  356138 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:16:46.597828  356138 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:46.597856  356138 system_pods.go:89] "coredns-66bc5c9577-fxl7j" [4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:46.597863  356138 system_pods.go:89] "etcd-embed-certs-412306" [f8befdc6-c172-4569-9ca7-2d3ba827dbb5] Running
	I1123 10:16:46.597870  356138 system_pods.go:89] "kindnet-sm2h2" [1af4c3f2-8377-4a64-9499-502b9841a81d] Running
	I1123 10:16:46.597876  356138 system_pods.go:89] "kube-apiserver-embed-certs-412306" [0c456387-52ea-4271-af83-9b87f7ddc832] Running
	I1123 10:16:46.597887  356138 system_pods.go:89] "kube-controller-manager-embed-certs-412306" [cebfc94c-5d85-40f3-8099-b50676f43ef5] Running
	I1123 10:16:46.597892  356138 system_pods.go:89] "kube-proxy-2vnjq" [10c4fa48-37ca-4164-83ef-7ab034f844a9] Running
	I1123 10:16:46.597898  356138 system_pods.go:89] "kube-scheduler-embed-certs-412306" [9384ec5c-f592-4f4d-84ba-313b7eabf50c] Running
	I1123 10:16:46.597905  356138 system_pods.go:89] "storage-provisioner" [199ec01f-2a64-4666-af02-cd1ad7ae4cc2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:46.597942  356138 retry.go:31] will retry after 236.958803ms: missing components: kube-dns
	I1123 10:16:46.840195  356138 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:46.840241  356138 system_pods.go:89] "coredns-66bc5c9577-fxl7j" [4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:46.840254  356138 system_pods.go:89] "etcd-embed-certs-412306" [f8befdc6-c172-4569-9ca7-2d3ba827dbb5] Running
	I1123 10:16:46.840283  356138 system_pods.go:89] "kindnet-sm2h2" [1af4c3f2-8377-4a64-9499-502b9841a81d] Running
	I1123 10:16:46.840293  356138 system_pods.go:89] "kube-apiserver-embed-certs-412306" [0c456387-52ea-4271-af83-9b87f7ddc832] Running
	I1123 10:16:46.840304  356138 system_pods.go:89] "kube-controller-manager-embed-certs-412306" [cebfc94c-5d85-40f3-8099-b50676f43ef5] Running
	I1123 10:16:46.840309  356138 system_pods.go:89] "kube-proxy-2vnjq" [10c4fa48-37ca-4164-83ef-7ab034f844a9] Running
	I1123 10:16:46.840317  356138 system_pods.go:89] "kube-scheduler-embed-certs-412306" [9384ec5c-f592-4f4d-84ba-313b7eabf50c] Running
	I1123 10:16:46.840326  356138 system_pods.go:89] "storage-provisioner" [199ec01f-2a64-4666-af02-cd1ad7ae4cc2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:46.840352  356138 retry.go:31] will retry after 288.634662ms: missing components: kube-dns
	I1123 10:16:47.133783  356138 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:47.133825  356138 system_pods.go:89] "coredns-66bc5c9577-fxl7j" [4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:47.133834  356138 system_pods.go:89] "etcd-embed-certs-412306" [f8befdc6-c172-4569-9ca7-2d3ba827dbb5] Running
	I1123 10:16:47.133844  356138 system_pods.go:89] "kindnet-sm2h2" [1af4c3f2-8377-4a64-9499-502b9841a81d] Running
	I1123 10:16:47.133850  356138 system_pods.go:89] "kube-apiserver-embed-certs-412306" [0c456387-52ea-4271-af83-9b87f7ddc832] Running
	I1123 10:16:47.133855  356138 system_pods.go:89] "kube-controller-manager-embed-certs-412306" [cebfc94c-5d85-40f3-8099-b50676f43ef5] Running
	I1123 10:16:47.133861  356138 system_pods.go:89] "kube-proxy-2vnjq" [10c4fa48-37ca-4164-83ef-7ab034f844a9] Running
	I1123 10:16:47.133866  356138 system_pods.go:89] "kube-scheduler-embed-certs-412306" [9384ec5c-f592-4f4d-84ba-313b7eabf50c] Running
	I1123 10:16:47.133874  356138 system_pods.go:89] "storage-provisioner" [199ec01f-2a64-4666-af02-cd1ad7ae4cc2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:47.133895  356138 retry.go:31] will retry after 329.106738ms: missing components: kube-dns
	I1123 10:16:47.467403  356138 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:47.467456  356138 system_pods.go:89] "coredns-66bc5c9577-fxl7j" [4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:47.467465  356138 system_pods.go:89] "etcd-embed-certs-412306" [f8befdc6-c172-4569-9ca7-2d3ba827dbb5] Running
	I1123 10:16:47.467474  356138 system_pods.go:89] "kindnet-sm2h2" [1af4c3f2-8377-4a64-9499-502b9841a81d] Running
	I1123 10:16:47.467480  356138 system_pods.go:89] "kube-apiserver-embed-certs-412306" [0c456387-52ea-4271-af83-9b87f7ddc832] Running
	I1123 10:16:47.467486  356138 system_pods.go:89] "kube-controller-manager-embed-certs-412306" [cebfc94c-5d85-40f3-8099-b50676f43ef5] Running
	I1123 10:16:47.467498  356138 system_pods.go:89] "kube-proxy-2vnjq" [10c4fa48-37ca-4164-83ef-7ab034f844a9] Running
	I1123 10:16:47.467504  356138 system_pods.go:89] "kube-scheduler-embed-certs-412306" [9384ec5c-f592-4f4d-84ba-313b7eabf50c] Running
	I1123 10:16:47.467516  356138 system_pods.go:89] "storage-provisioner" [199ec01f-2a64-4666-af02-cd1ad7ae4cc2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:47.467545  356138 retry.go:31] will retry after 556.171915ms: missing components: kube-dns
	I1123 10:16:48.028184  356138 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:48.028230  356138 system_pods.go:89] "coredns-66bc5c9577-fxl7j" [4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b] Running
	I1123 10:16:48.028239  356138 system_pods.go:89] "etcd-embed-certs-412306" [f8befdc6-c172-4569-9ca7-2d3ba827dbb5] Running
	I1123 10:16:48.028244  356138 system_pods.go:89] "kindnet-sm2h2" [1af4c3f2-8377-4a64-9499-502b9841a81d] Running
	I1123 10:16:48.028248  356138 system_pods.go:89] "kube-apiserver-embed-certs-412306" [0c456387-52ea-4271-af83-9b87f7ddc832] Running
	I1123 10:16:48.028252  356138 system_pods.go:89] "kube-controller-manager-embed-certs-412306" [cebfc94c-5d85-40f3-8099-b50676f43ef5] Running
	I1123 10:16:48.028255  356138 system_pods.go:89] "kube-proxy-2vnjq" [10c4fa48-37ca-4164-83ef-7ab034f844a9] Running
	I1123 10:16:48.028259  356138 system_pods.go:89] "kube-scheduler-embed-certs-412306" [9384ec5c-f592-4f4d-84ba-313b7eabf50c] Running
	I1123 10:16:48.028262  356138 system_pods.go:89] "storage-provisioner" [199ec01f-2a64-4666-af02-cd1ad7ae4cc2] Running
	I1123 10:16:48.028270  356138 system_pods.go:126] duration metric: took 1.433040723s to wait for k8s-apps to be running ...
	I1123 10:16:48.028279  356138 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:16:48.028322  356138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:16:48.041305  356138 system_svc.go:56] duration metric: took 13.015993ms WaitForService to wait for kubelet
	I1123 10:16:48.041336  356138 kubeadm.go:587] duration metric: took 12.755118682s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:16:48.041361  356138 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:16:48.044390  356138 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:16:48.044420  356138 node_conditions.go:123] node cpu capacity is 8
	I1123 10:16:48.044439  356138 node_conditions.go:105] duration metric: took 3.072771ms to run NodePressure ...
	I1123 10:16:48.044457  356138 start.go:242] waiting for startup goroutines ...
	I1123 10:16:48.044471  356138 start.go:247] waiting for cluster config update ...
	I1123 10:16:48.044488  356138 start.go:256] writing updated cluster config ...
	I1123 10:16:48.044772  356138 ssh_runner.go:195] Run: rm -f paused
	I1123 10:16:48.048532  356138 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:48.051926  356138 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fxl7j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:48.056287  356138 pod_ready.go:94] pod "coredns-66bc5c9577-fxl7j" is "Ready"
	I1123 10:16:48.056323  356138 pod_ready.go:86] duration metric: took 4.377095ms for pod "coredns-66bc5c9577-fxl7j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:48.058178  356138 pod_ready.go:83] waiting for pod "etcd-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:48.061689  356138 pod_ready.go:94] pod "etcd-embed-certs-412306" is "Ready"
	I1123 10:16:48.061711  356138 pod_ready.go:86] duration metric: took 3.514207ms for pod "etcd-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:48.063466  356138 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:48.067063  356138 pod_ready.go:94] pod "kube-apiserver-embed-certs-412306" is "Ready"
	I1123 10:16:48.067080  356138 pod_ready.go:86] duration metric: took 3.595858ms for pod "kube-apiserver-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:48.069048  356138 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:48.452780  356138 pod_ready.go:94] pod "kube-controller-manager-embed-certs-412306" is "Ready"
	I1123 10:16:48.452805  356138 pod_ready.go:86] duration metric: took 383.73999ms for pod "kube-controller-manager-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:48.653743  356138 pod_ready.go:83] waiting for pod "kube-proxy-2vnjq" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:49.052970  356138 pod_ready.go:94] pod "kube-proxy-2vnjq" is "Ready"
	I1123 10:16:49.052998  356138 pod_ready.go:86] duration metric: took 399.22677ms for pod "kube-proxy-2vnjq" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:49.253502  356138 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:49.652551  356138 pod_ready.go:94] pod "kube-scheduler-embed-certs-412306" is "Ready"
	I1123 10:16:49.652578  356138 pod_ready.go:86] duration metric: took 399.044168ms for pod "kube-scheduler-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:49.652589  356138 pod_ready.go:40] duration metric: took 1.604029447s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:49.695575  356138 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:16:49.697240  356138 out.go:179] * Done! kubectl is now configured to use "embed-certs-412306" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 10:16:38 no-preload-541522 crio[769]: time="2025-11-23T10:16:38.856044338Z" level=info msg="Starting container: df8cf72b768b89dccf5f3663fec34509d41f7ccf631e3700919b427dd256d70a" id=8178abe3-e5d8-497c-9c26-70e99a0df87b name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:16:38 no-preload-541522 crio[769]: time="2025-11-23T10:16:38.857992086Z" level=info msg="Started container" PID=2839 containerID=df8cf72b768b89dccf5f3663fec34509d41f7ccf631e3700919b427dd256d70a description=kube-system/coredns-66bc5c9577-krmwt/coredns id=8178abe3-e5d8-497c-9c26-70e99a0df87b name=/runtime.v1.RuntimeService/StartContainer sandboxID=e8e21f040a6b9a379b4b77d82fbf8c4cb1255587291600bca44b98956a6f4e83
	Nov 23 10:16:41 no-preload-541522 crio[769]: time="2025-11-23T10:16:41.713882699Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d73b5ab3-ff53-489b-b849-c8d59499ecfd name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:16:41 no-preload-541522 crio[769]: time="2025-11-23T10:16:41.713955856Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:16:41 no-preload-541522 crio[769]: time="2025-11-23T10:16:41.719336763Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2e78d1d8b2a2207c677d19cb6e0c799497740783fb5e9e63b37fe8ac44991c54 UID:ea00f8c7-1f30-4a4a-87f5-a86e0f94c3be NetNS:/var/run/netns/6da93856-3adf-493b-96a1-5603b56d52d9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000825018}] Aliases:map[]}"
	Nov 23 10:16:41 no-preload-541522 crio[769]: time="2025-11-23T10:16:41.719370862Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 10:16:41 no-preload-541522 crio[769]: time="2025-11-23T10:16:41.729308968Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2e78d1d8b2a2207c677d19cb6e0c799497740783fb5e9e63b37fe8ac44991c54 UID:ea00f8c7-1f30-4a4a-87f5-a86e0f94c3be NetNS:/var/run/netns/6da93856-3adf-493b-96a1-5603b56d52d9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000825018}] Aliases:map[]}"
	Nov 23 10:16:41 no-preload-541522 crio[769]: time="2025-11-23T10:16:41.729449338Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 10:16:41 no-preload-541522 crio[769]: time="2025-11-23T10:16:41.730199548Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 10:16:41 no-preload-541522 crio[769]: time="2025-11-23T10:16:41.731417677Z" level=info msg="Ran pod sandbox 2e78d1d8b2a2207c677d19cb6e0c799497740783fb5e9e63b37fe8ac44991c54 with infra container: default/busybox/POD" id=d73b5ab3-ff53-489b-b849-c8d59499ecfd name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:16:41 no-preload-541522 crio[769]: time="2025-11-23T10:16:41.732601464Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5830d1db-d59a-4c88-9958-02391d839548 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:16:41 no-preload-541522 crio[769]: time="2025-11-23T10:16:41.732695815Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5830d1db-d59a-4c88-9958-02391d839548 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:16:41 no-preload-541522 crio[769]: time="2025-11-23T10:16:41.732722674Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=5830d1db-d59a-4c88-9958-02391d839548 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:16:41 no-preload-541522 crio[769]: time="2025-11-23T10:16:41.733242127Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e101caca-c9f6-475a-a917-f716513a1312 name=/runtime.v1.ImageService/PullImage
	Nov 23 10:16:41 no-preload-541522 crio[769]: time="2025-11-23T10:16:41.734576317Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 10:16:43 no-preload-541522 crio[769]: time="2025-11-23T10:16:43.991326679Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=e101caca-c9f6-475a-a917-f716513a1312 name=/runtime.v1.ImageService/PullImage
	Nov 23 10:16:43 no-preload-541522 crio[769]: time="2025-11-23T10:16:43.99206861Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2b03c66d-5422-4f01-bf14-a53b86b01670 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:16:43 no-preload-541522 crio[769]: time="2025-11-23T10:16:43.993865757Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=086756d7-3c04-4fc3-8a17-bbc3fcc03f4c name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:16:43 no-preload-541522 crio[769]: time="2025-11-23T10:16:43.997055504Z" level=info msg="Creating container: default/busybox/busybox" id=89872626-0884-481f-8481-1557fb706efd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:16:43 no-preload-541522 crio[769]: time="2025-11-23T10:16:43.997337033Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:16:44 no-preload-541522 crio[769]: time="2025-11-23T10:16:44.001039075Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:16:44 no-preload-541522 crio[769]: time="2025-11-23T10:16:44.001512988Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:16:44 no-preload-541522 crio[769]: time="2025-11-23T10:16:44.031436737Z" level=info msg="Created container 1d5d5e0ad9d0191e40162b719e1b8b68b0116aa35c7e456ec42e8aa72881bfda: default/busybox/busybox" id=89872626-0884-481f-8481-1557fb706efd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:16:44 no-preload-541522 crio[769]: time="2025-11-23T10:16:44.032077794Z" level=info msg="Starting container: 1d5d5e0ad9d0191e40162b719e1b8b68b0116aa35c7e456ec42e8aa72881bfda" id=b07d7e29-ae9e-44f1-a3f1-4a709c25abf9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:16:44 no-preload-541522 crio[769]: time="2025-11-23T10:16:44.033731767Z" level=info msg="Started container" PID=2915 containerID=1d5d5e0ad9d0191e40162b719e1b8b68b0116aa35c7e456ec42e8aa72881bfda description=default/busybox/busybox id=b07d7e29-ae9e-44f1-a3f1-4a709c25abf9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2e78d1d8b2a2207c677d19cb6e0c799497740783fb5e9e63b37fe8ac44991c54
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	1d5d5e0ad9d01       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   2e78d1d8b2a22       busybox                                     default
	df8cf72b768b8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   e8e21f040a6b9       coredns-66bc5c9577-krmwt                    kube-system
	46063f5f9d579       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   ddd5b4b20901b       storage-provisioner                         kube-system
	2bd592b233220       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    25 seconds ago      Running             kindnet-cni               0                   43f5e6b36eab5       kindnet-9vppw                               kube-system
	6bf86a68b6bc4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      27 seconds ago      Running             kube-proxy                0                   30a2feefceef7       kube-proxy-sllct                            kube-system
	7eba9f56088c5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      37 seconds ago      Running             kube-controller-manager   0                   e10748d9e329e       kube-controller-manager-no-preload-541522   kube-system
	7ce4b4a0b29e5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      38 seconds ago      Running             etcd                      0                   593a6f4a56c96       etcd-no-preload-541522                      kube-system
	8d5bfb115d3dd       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      38 seconds ago      Running             kube-scheduler            0                   831ab3f7d32ac       kube-scheduler-no-preload-541522            kube-system
	219f42f76fb5d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      38 seconds ago      Running             kube-apiserver            0                   ac06d64b5a7ec       kube-apiserver-no-preload-541522            kube-system
	
	
	==> coredns [df8cf72b768b89dccf5f3663fec34509d41f7ccf631e3700919b427dd256d70a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49679 - 53024 "HINFO IN 3043628096879467340.1171327937767677560. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.03872191s
	
	
	==> describe nodes <==
	Name:               no-preload-541522
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-541522
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=no-preload-541522
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_16_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:16:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-541522
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:16:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:16:50 +0000   Sun, 23 Nov 2025 10:16:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:16:50 +0000   Sun, 23 Nov 2025 10:16:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:16:50 +0000   Sun, 23 Nov 2025 10:16:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:16:50 +0000   Sun, 23 Nov 2025 10:16:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-541522
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                9eef6a41-5317-48ee-8389-6d173ebb4813
	  Boot ID:                    37682299-5e60-467e-85b2-43c912a4056e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-krmwt                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-no-preload-541522                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-9vppw                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-541522             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-541522    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-sllct                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-541522             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node no-preload-541522 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node no-preload-541522 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node no-preload-541522 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node no-preload-541522 event: Registered Node no-preload-541522 in Controller
	  Normal  NodeReady                14s   kubelet          Node no-preload-541522 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[ +16.383752] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 09:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 10:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[Nov23 10:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 16 63 a6 3b 7c 08 06
	[  +0.000421] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e f8 56 88 48 d7 08 06
	[  +0.082350] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff be 6d 17 98 af e9 08 06
	[  +0.000334] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[ +24.687881] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 3c b3 56 e6 32 08 06
	[  +0.000364] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da b2 25 9e f0 5d 08 06
	[Nov23 10:16] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	[ +42.472302] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 bc be 6d 36 b3 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	
	
	==> etcd [7ce4b4a0b29e5716baf73b4ebc070289b8208dfe1fa24551f534ce51a6f3ae35] <==
	{"level":"warn","ts":"2025-11-23T10:16:16.223307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.237460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.244959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.252608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.260293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.267353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.273889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.280345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.286660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.296204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.302190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.307974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.314136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.320136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.326188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.332741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.339291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.345410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.351340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.358527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.364683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.385469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.391826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.398942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:16.451526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35558","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:16:53 up  2:59,  0 user,  load average: 6.99, 5.27, 2.88
	Linux no-preload-541522 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2bd592b233220dc1942d701aafc640d7be2373c9967251fa42a74dc23603f4c5] <==
	I1123 10:16:27.878436       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:16:27.878686       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 10:16:27.878810       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:16:27.878824       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:16:27.878842       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:16:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:16:28.080083       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:16:28.080179       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:16:28.080192       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:16:28.080423       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:16:28.550979       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:16:28.551009       1 metrics.go:72] Registering metrics
	I1123 10:16:28.551153       1 controller.go:711] "Syncing nftables rules"
	I1123 10:16:38.082271       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:16:38.082357       1 main.go:301] handling current node
	I1123 10:16:48.079801       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:16:48.079834       1 main.go:301] handling current node
	
	
	==> kube-apiserver [219f42f76fb5dd29235861b9c1ca937feb9e71046d2e418de76d19f32caf4ca5] <==
	I1123 10:16:16.967400       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1123 10:16:16.971822       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:16:16.972267       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 10:16:16.977627       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:16:16.978358       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 10:16:17.156519       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:16:17.867598       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 10:16:17.872818       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 10:16:17.872838       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:16:18.468405       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:16:18.513150       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:16:18.571790       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 10:16:18.579837       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 10:16:18.581383       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:16:18.588083       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:16:18.916921       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:16:19.590634       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:16:19.600956       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 10:16:19.607736       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 10:16:24.668793       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 10:16:24.668819       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 10:16:24.725163       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:16:24.734596       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:16:24.968306       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1123 10:16:51.500446       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:55698: use of closed network connection
	
	
	==> kube-controller-manager [7eba9f56088c5da9c79ed986f97d788681eb258771ec93129c7d66ba5ac29b0e] <==
	I1123 10:16:23.914822       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 10:16:23.914884       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:16:23.915001       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 10:16:23.915203       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 10:16:23.915323       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 10:16:23.915394       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 10:16:23.915405       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:16:23.915522       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 10:16:23.915585       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 10:16:23.915896       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 10:16:23.918767       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 10:16:23.918774       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:16:23.918815       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 10:16:23.918838       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 10:16:23.918847       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 10:16:23.918854       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 10:16:23.918874       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 10:16:23.923054       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:16:23.923070       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 10:16:23.923078       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 10:16:23.924132       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 10:16:23.925385       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-541522" podCIDRs=["10.244.0.0/24"]
	I1123 10:16:23.936402       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 10:16:23.941685       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:16:38.915130       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6bf86a68b6bc4344d38c074bb2cc659c8d0845d4e549a21a065cd0019e1031e5] <==
	I1123 10:16:25.161585       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:16:25.238365       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:16:25.338500       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:16:25.338551       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 10:16:25.338682       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:16:25.357415       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:16:25.357466       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:16:25.362787       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:16:25.363315       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:16:25.363363       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:16:25.366889       1 config.go:200] "Starting service config controller"
	I1123 10:16:25.366910       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:16:25.366944       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:16:25.366950       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:16:25.367029       1 config.go:309] "Starting node config controller"
	I1123 10:16:25.367040       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:16:25.367046       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:16:25.367040       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:16:25.367057       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:16:25.467015       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:16:25.467120       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:16:25.467140       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8d5bfb115d3ddefd176c0ca08051c47d9da55561c9d4ab358eaa02d0e115a394] <==
	E1123 10:16:16.919390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 10:16:16.919498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 10:16:16.919902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 10:16:16.920072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 10:16:16.920110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 10:16:16.920220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 10:16:16.920220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 10:16:16.920833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 10:16:16.920838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 10:16:16.920866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 10:16:16.920868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 10:16:16.920926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 10:16:16.921010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 10:16:16.921027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 10:16:16.921033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 10:16:17.755140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 10:16:17.828746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 10:16:17.852455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 10:16:17.871797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 10:16:17.883893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 10:16:18.027535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 10:16:18.100948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 10:16:18.202509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 10:16:18.205512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1123 10:16:21.117079       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:16:20 no-preload-541522 kubelet[2238]: I1123 10:16:20.525220    2238 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-541522" podStartSLOduration=1.525202715 podStartE2EDuration="1.525202715s" podCreationTimestamp="2025-11-23 10:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:16:20.524855198 +0000 UTC m=+1.173930078" watchObservedRunningTime="2025-11-23 10:16:20.525202715 +0000 UTC m=+1.174277576"
	Nov 23 10:16:20 no-preload-541522 kubelet[2238]: I1123 10:16:20.542223    2238 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-541522" podStartSLOduration=1.542203243 podStartE2EDuration="1.542203243s" podCreationTimestamp="2025-11-23 10:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:16:20.542159518 +0000 UTC m=+1.191234399" watchObservedRunningTime="2025-11-23 10:16:20.542203243 +0000 UTC m=+1.191278123"
	Nov 23 10:16:20 no-preload-541522 kubelet[2238]: I1123 10:16:20.577444    2238 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-541522" podStartSLOduration=1.577421352 podStartE2EDuration="1.577421352s" podCreationTimestamp="2025-11-23 10:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:16:20.557553493 +0000 UTC m=+1.206628374" watchObservedRunningTime="2025-11-23 10:16:20.577421352 +0000 UTC m=+1.226496246"
	Nov 23 10:16:23 no-preload-541522 kubelet[2238]: I1123 10:16:23.969349    2238 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 10:16:23 no-preload-541522 kubelet[2238]: I1123 10:16:23.970033    2238 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 10:16:24 no-preload-541522 kubelet[2238]: I1123 10:16:24.767160    2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b98e7a4-34e9-46af-97a1-764b6ed92ec6-xtables-lock\") pod \"kindnet-9vppw\" (UID: \"3b98e7a4-34e9-46af-97a1-764b6ed92ec6\") " pod="kube-system/kindnet-9vppw"
	Nov 23 10:16:24 no-preload-541522 kubelet[2238]: I1123 10:16:24.767227    2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b98e7a4-34e9-46af-97a1-764b6ed92ec6-lib-modules\") pod \"kindnet-9vppw\" (UID: \"3b98e7a4-34e9-46af-97a1-764b6ed92ec6\") " pod="kube-system/kindnet-9vppw"
	Nov 23 10:16:24 no-preload-541522 kubelet[2238]: I1123 10:16:24.767285    2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5b13417-4bca-4ec1-8e60-cf5016aa28ca-lib-modules\") pod \"kube-proxy-sllct\" (UID: \"c5b13417-4bca-4ec1-8e60-cf5016aa28ca\") " pod="kube-system/kube-proxy-sllct"
	Nov 23 10:16:24 no-preload-541522 kubelet[2238]: I1123 10:16:24.767309    2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3b98e7a4-34e9-46af-97a1-764b6ed92ec6-cni-cfg\") pod \"kindnet-9vppw\" (UID: \"3b98e7a4-34e9-46af-97a1-764b6ed92ec6\") " pod="kube-system/kindnet-9vppw"
	Nov 23 10:16:24 no-preload-541522 kubelet[2238]: I1123 10:16:24.767335    2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxbgx\" (UniqueName: \"kubernetes.io/projected/3b98e7a4-34e9-46af-97a1-764b6ed92ec6-kube-api-access-gxbgx\") pod \"kindnet-9vppw\" (UID: \"3b98e7a4-34e9-46af-97a1-764b6ed92ec6\") " pod="kube-system/kindnet-9vppw"
	Nov 23 10:16:24 no-preload-541522 kubelet[2238]: I1123 10:16:24.767418    2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c5b13417-4bca-4ec1-8e60-cf5016aa28ca-kube-proxy\") pod \"kube-proxy-sllct\" (UID: \"c5b13417-4bca-4ec1-8e60-cf5016aa28ca\") " pod="kube-system/kube-proxy-sllct"
	Nov 23 10:16:24 no-preload-541522 kubelet[2238]: I1123 10:16:24.767458    2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5b13417-4bca-4ec1-8e60-cf5016aa28ca-xtables-lock\") pod \"kube-proxy-sllct\" (UID: \"c5b13417-4bca-4ec1-8e60-cf5016aa28ca\") " pod="kube-system/kube-proxy-sllct"
	Nov 23 10:16:24 no-preload-541522 kubelet[2238]: I1123 10:16:24.767479    2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb7x7\" (UniqueName: \"kubernetes.io/projected/c5b13417-4bca-4ec1-8e60-cf5016aa28ca-kube-api-access-jb7x7\") pod \"kube-proxy-sllct\" (UID: \"c5b13417-4bca-4ec1-8e60-cf5016aa28ca\") " pod="kube-system/kube-proxy-sllct"
	Nov 23 10:16:28 no-preload-541522 kubelet[2238]: I1123 10:16:28.504517    2238 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sllct" podStartSLOduration=4.504495771 podStartE2EDuration="4.504495771s" podCreationTimestamp="2025-11-23 10:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:16:25.530319602 +0000 UTC m=+6.179394482" watchObservedRunningTime="2025-11-23 10:16:28.504495771 +0000 UTC m=+9.153570650"
	Nov 23 10:16:28 no-preload-541522 kubelet[2238]: I1123 10:16:28.504692    2238 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9vppw" podStartSLOduration=1.882494643 podStartE2EDuration="4.504679277s" podCreationTimestamp="2025-11-23 10:16:24 +0000 UTC" firstStartedPulling="2025-11-23 10:16:25.049689616 +0000 UTC m=+5.698764496" lastFinishedPulling="2025-11-23 10:16:27.67187427 +0000 UTC m=+8.320949130" observedRunningTime="2025-11-23 10:16:28.504465677 +0000 UTC m=+9.153540556" watchObservedRunningTime="2025-11-23 10:16:28.504679277 +0000 UTC m=+9.153754159"
	Nov 23 10:16:38 no-preload-541522 kubelet[2238]: I1123 10:16:38.478150    2238 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 10:16:38 no-preload-541522 kubelet[2238]: I1123 10:16:38.567526    2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvm68\" (UniqueName: \"kubernetes.io/projected/39101b53-5254-41f3-bac9-c711e67dc551-kube-api-access-kvm68\") pod \"coredns-66bc5c9577-krmwt\" (UID: \"39101b53-5254-41f3-bac9-c711e67dc551\") " pod="kube-system/coredns-66bc5c9577-krmwt"
	Nov 23 10:16:38 no-preload-541522 kubelet[2238]: I1123 10:16:38.567587    2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/40eb99ea-9515-431c-888b-81826014f8a6-tmp\") pod \"storage-provisioner\" (UID: \"40eb99ea-9515-431c-888b-81826014f8a6\") " pod="kube-system/storage-provisioner"
	Nov 23 10:16:38 no-preload-541522 kubelet[2238]: I1123 10:16:38.567619    2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j87wc\" (UniqueName: \"kubernetes.io/projected/40eb99ea-9515-431c-888b-81826014f8a6-kube-api-access-j87wc\") pod \"storage-provisioner\" (UID: \"40eb99ea-9515-431c-888b-81826014f8a6\") " pod="kube-system/storage-provisioner"
	Nov 23 10:16:38 no-preload-541522 kubelet[2238]: I1123 10:16:38.567666    2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39101b53-5254-41f3-bac9-c711e67dc551-config-volume\") pod \"coredns-66bc5c9577-krmwt\" (UID: \"39101b53-5254-41f3-bac9-c711e67dc551\") " pod="kube-system/coredns-66bc5c9577-krmwt"
	Nov 23 10:16:39 no-preload-541522 kubelet[2238]: I1123 10:16:39.528386    2238 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-krmwt" podStartSLOduration=14.528365038 podStartE2EDuration="14.528365038s" podCreationTimestamp="2025-11-23 10:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:16:39.528066255 +0000 UTC m=+20.177141135" watchObservedRunningTime="2025-11-23 10:16:39.528365038 +0000 UTC m=+20.177439918"
	Nov 23 10:16:39 no-preload-541522 kubelet[2238]: I1123 10:16:39.546029    2238 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.546007431 podStartE2EDuration="14.546007431s" podCreationTimestamp="2025-11-23 10:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:16:39.537178149 +0000 UTC m=+20.186253029" watchObservedRunningTime="2025-11-23 10:16:39.546007431 +0000 UTC m=+20.195082312"
	Nov 23 10:16:41 no-preload-541522 kubelet[2238]: I1123 10:16:41.485840    2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b669r\" (UniqueName: \"kubernetes.io/projected/ea00f8c7-1f30-4a4a-87f5-a86e0f94c3be-kube-api-access-b669r\") pod \"busybox\" (UID: \"ea00f8c7-1f30-4a4a-87f5-a86e0f94c3be\") " pod="default/busybox"
	Nov 23 10:16:44 no-preload-541522 kubelet[2238]: I1123 10:16:44.541148    2238 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.280733922 podStartE2EDuration="3.541126408s" podCreationTimestamp="2025-11-23 10:16:41 +0000 UTC" firstStartedPulling="2025-11-23 10:16:41.73289216 +0000 UTC m=+22.381967019" lastFinishedPulling="2025-11-23 10:16:43.993284631 +0000 UTC m=+24.642359505" observedRunningTime="2025-11-23 10:16:44.54091809 +0000 UTC m=+25.189992970" watchObservedRunningTime="2025-11-23 10:16:44.541126408 +0000 UTC m=+25.190201289"
	Nov 23 10:16:51 no-preload-541522 kubelet[2238]: E1123 10:16:51.500378    2238 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41324->127.0.0.1:45139: write tcp 127.0.0.1:41324->127.0.0.1:45139: write: broken pipe
	
	
	==> storage-provisioner [46063f5f9d5795beedd0600c6e2221adab0bb9ef33cd692757e5997ef4675a7b] <==
	I1123 10:16:38.864870       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:16:38.873738       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:16:38.873778       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:16:38.875818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:38.880887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:16:38.881120       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:16:38.881272       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-541522_dfff5df7-c676-4e56-81d8-a481b1d628fe!
	I1123 10:16:38.881267       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc2f647a-dbc0-4e88-bc5d-2f4e9ba1110c", APIVersion:"v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-541522_dfff5df7-c676-4e56-81d8-a481b1d628fe became leader
	W1123 10:16:38.883153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:38.886422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:16:38.982162       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-541522_dfff5df7-c676-4e56-81d8-a481b1d628fe!
	W1123 10:16:40.890154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:40.893931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:42.896699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:42.900823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:44.904319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:44.909102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:46.913100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:46.917302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:48.920334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:48.924283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:50.927246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:50.930842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:52.933959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:52.941304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-541522 -n no-preload-541522
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-541522 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.24s)
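The storage-provisioner log above shows its leader election still going through the deprecated v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), which is what produces the repeated EndpointSlice warnings. A minimal inspection sketch, assuming the no-preload-541522 cluster from this run is still reachable, is:

	kubectl --context no-preload-541522 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml

The leader-election annotation on that object should name the same holder that the log above reports acquiring the lease (no-preload-541522_dfff5df7-c676-4e56-81d8-a481b1d628fe).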

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-412306 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-412306 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (261.358849ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:16:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
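The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state check, which shells into the node and runs runc list. A minimal way to reproduce that check by hand, assuming the embed-certs-412306 profile from this run, is:

	out/minikube-linux-amd64 -p embed-certs-412306 ssh -- sudo runc list -f json
	out/minikube-linux-amd64 -p embed-certs-412306 ssh -- sudo ls -ld /run/runc

If /run/runc is still absent on this CRI-O node, the second command should fail with the same "no such file or directory" error, which is why the list call exits with status 1.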
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-412306 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-412306 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-412306 describe deploy/metrics-server -n kube-system: exit status 1 (58.369271ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-412306 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
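The assertion at start_stop_delete_test.go:219 only checks that the metrics-server Deployment's image string contains the overridden registry. When the Deployment exists, the same check can be made directly with jsonpath instead of describe, for example:

	kubectl --context embed-certs-412306 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'

Here the test fails earlier because the enable command itself exited with MK_ADDON_ENABLE_PAUSED, so the Deployment is NotFound and the expected fake.domain/registry.k8s.io/echoserver:1.4 image never appears.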
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-412306
helpers_test.go:243: (dbg) docker inspect embed-certs-412306:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2363fe4602f510ad2579d8b3b443f201366bc865187d0f0f21ea72677edf75dd",
	        "Created": "2025-11-23T10:16:14.870430409Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 357242,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:16:14.919569939Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/2363fe4602f510ad2579d8b3b443f201366bc865187d0f0f21ea72677edf75dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2363fe4602f510ad2579d8b3b443f201366bc865187d0f0f21ea72677edf75dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/2363fe4602f510ad2579d8b3b443f201366bc865187d0f0f21ea72677edf75dd/hosts",
	        "LogPath": "/var/lib/docker/containers/2363fe4602f510ad2579d8b3b443f201366bc865187d0f0f21ea72677edf75dd/2363fe4602f510ad2579d8b3b443f201366bc865187d0f0f21ea72677edf75dd-json.log",
	        "Name": "/embed-certs-412306",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-412306:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-412306",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2363fe4602f510ad2579d8b3b443f201366bc865187d0f0f21ea72677edf75dd",
	                "LowerDir": "/var/lib/docker/overlay2/48da241729f2aaaab120e58658600759e52c4c030fbd00be0d48925dc10c5b62-init/diff:/var/lib/docker/overlay2/fa24abb4c55f78a010c7e2a32f724b8d5e912441e40bb77877899b0e5f3a9c8d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/48da241729f2aaaab120e58658600759e52c4c030fbd00be0d48925dc10c5b62/merged",
	                "UpperDir": "/var/lib/docker/overlay2/48da241729f2aaaab120e58658600759e52c4c030fbd00be0d48925dc10c5b62/diff",
	                "WorkDir": "/var/lib/docker/overlay2/48da241729f2aaaab120e58658600759e52c4c030fbd00be0d48925dc10c5b62/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-412306",
	                "Source": "/var/lib/docker/volumes/embed-certs-412306/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-412306",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-412306",
	                "name.minikube.sigs.k8s.io": "embed-certs-412306",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c84af3e47e5be3f455719ebf420b6dc42d8f808f8cd791e6123b8b601c33a963",
	            "SandboxKey": "/var/run/docker/netns/c84af3e47e5b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-412306": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "80c19d1f62c6174f298a861aa9911c5900bfe0857882aac57b7c600a7d06c5aa",
	                    "EndpointID": "df4bac4671995730e5752e585eda15d0a490d2c8abbcd9b22f677cb4092f9834",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ce:8f:ee:0c:d2:f1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-412306",
	                        "2363fe4602f5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
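The post-mortem commands that follow reach the cluster through the host ports listed under NetworkSettings.Ports above. Reusing the same Go template the harness uses for 22/tcp, the forwarded API server port can be read back with, for example:

	docker inspect embed-certs-412306 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'

Per the inspect output above this prints 33101, the 127.0.0.1 port on which this profile's API server (container port 8443) is exposed on the host.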
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-412306 -n embed-certs-412306
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-412306 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-412306 logs -n 25: (1.074577426s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                          │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-791161 sudo systemctl cat cri-docker --no-pager                                                                                             │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:15 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                        │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ ssh     │ -p flannel-791161 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                  │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo cri-dockerd --version                                                                                                           │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo systemctl status containerd --all --full --no-pager                                                                             │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ ssh     │ -p flannel-791161 sudo systemctl cat containerd --no-pager                                                                                             │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo cat /lib/systemd/system/containerd.service                                                                                      │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo cat /etc/containerd/config.toml                                                                                                 │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo containerd config dump                                                                                                          │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo systemctl status crio --all --full --no-pager                                                                                   │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo systemctl cat crio --no-pager                                                                                                   │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                         │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p flannel-791161 sudo crio config                                                                                                                     │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ delete  │ -p flannel-791161                                                                                                                                      │ flannel-791161         │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ start   │ -p embed-certs-412306 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ embed-certs-412306     │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p bridge-791161 pgrep -a kubelet                                                                                                                      │ bridge-791161          │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-990757 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain           │ old-k8s-version-990757 │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ stop    │ -p old-k8s-version-990757 --alsologtostderr -v=3                                                                                                       │ old-k8s-version-990757 │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-541522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                │ no-preload-541522      │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ stop    │ -p no-preload-541522 --alsologtostderr -v=3                                                                                                            │ no-preload-541522      │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ ssh     │ -p bridge-791161 sudo cat /etc/nsswitch.conf                                                                                                           │ bridge-791161          │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p bridge-791161 sudo cat /etc/hosts                                                                                                                   │ bridge-791161          │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-412306 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain               │ embed-certs-412306     │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │                     │
	│ ssh     │ -p bridge-791161 sudo cat /etc/resolv.conf                                                                                                             │ bridge-791161          │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	│ ssh     │ -p bridge-791161 sudo crictl pods                                                                                                                      │ bridge-791161          │ jenkins │ v1.37.0 │ 23 Nov 25 10:16 UTC │ 23 Nov 25 10:16 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:16:09
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:16:09.384488  356138 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:16:09.384651  356138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:16:09.384664  356138 out.go:374] Setting ErrFile to fd 2...
	I1123 10:16:09.384670  356138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:16:09.384941  356138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:16:09.385666  356138 out.go:368] Setting JSON to false
	I1123 10:16:09.387494  356138 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10710,"bootTime":1763882259,"procs":490,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:16:09.387583  356138 start.go:143] virtualization: kvm guest
	I1123 10:16:09.389675  356138 out.go:179] * [embed-certs-412306] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:16:09.391215  356138 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:16:09.391256  356138 notify.go:221] Checking for updates...
	I1123 10:16:09.393259  356138 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:16:09.394603  356138 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:16:09.395803  356138 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 10:16:09.397054  356138 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:16:09.398810  356138 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:16:09.400667  356138 config.go:182] Loaded profile config "bridge-791161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:16:09.400825  356138 config.go:182] Loaded profile config "no-preload-541522": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:16:09.400980  356138 config.go:182] Loaded profile config "old-k8s-version-990757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 10:16:09.401117  356138 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:16:09.431550  356138 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 10:16:09.431721  356138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:16:09.501610  356138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-23 10:16:09.486961066 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:16:09.501769  356138 docker.go:319] overlay module found
	I1123 10:16:09.503502  356138 out.go:179] * Using the docker driver based on user configuration
	I1123 10:16:08.932406  341630 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:08.932428  341630 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:16:08.932485  341630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-791161
	I1123 10:16:08.962254  341630 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:08.962286  341630 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:16:08.962357  341630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-791161
	I1123 10:16:08.969489  341630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/bridge-791161/id_rsa Username:docker}
	I1123 10:16:08.986744  341630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/bridge-791161/id_rsa Username:docker}
	I1123 10:16:09.003812  341630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:16:09.056864  341630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:16:09.090911  341630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:09.108517  341630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:09.226531  341630 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1123 10:16:09.228833  341630 node_ready.go:35] waiting up to 15m0s for node "bridge-791161" to be "Ready" ...
	I1123 10:16:09.245324  341630 node_ready.go:49] node "bridge-791161" is "Ready"
	I1123 10:16:09.245361  341630 node_ready.go:38] duration metric: took 16.394308ms for node "bridge-791161" to be "Ready" ...
	I1123 10:16:09.245379  341630 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:16:09.245433  341630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:16:09.502654  341630 api_server.go:72] duration metric: took 602.591604ms to wait for apiserver process to appear ...
	I1123 10:16:09.502681  341630 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:16:09.502706  341630 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 10:16:09.509263  341630 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 10:16:09.510155  341630 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 10:16:09.504848  356138 start.go:309] selected driver: docker
	I1123 10:16:09.504864  356138 start.go:927] validating driver "docker" against <nil>
	I1123 10:16:09.504878  356138 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:16:09.505666  356138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:16:09.570314  356138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-23 10:16:09.560155745 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:16:09.570532  356138 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 10:16:09.570826  356138 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:16:09.572359  356138 out.go:179] * Using Docker driver with root privileges
	I1123 10:16:09.573651  356138 cni.go:84] Creating CNI manager for ""
	I1123 10:16:09.573735  356138 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:16:09.573748  356138 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:16:09.573829  356138 start.go:353] cluster config:
	{Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:16:09.575056  356138 out.go:179] * Starting "embed-certs-412306" primary control-plane node in "embed-certs-412306" cluster
	I1123 10:16:09.576077  356138 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:16:09.577197  356138 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:16:09.578314  356138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:16:09.578350  356138 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 10:16:09.578363  356138 cache.go:65] Caching tarball of preloaded images
	I1123 10:16:09.578405  356138 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:16:09.578475  356138 preload.go:238] Found /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 10:16:09.578490  356138 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:16:09.578607  356138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/config.json ...
	I1123 10:16:09.578632  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/config.json: {Name:mk1fd6c8c1b8c2c18e5b4ea57dc46155bd997340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:09.603731  356138 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:16:09.603757  356138 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:16:09.603773  356138 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:16:09.603816  356138 start.go:360] acquireMachinesLock for embed-certs-412306: {Name:mk4f25fc676f86a4d15ab0bc341b16f0d56928c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:16:09.603920  356138 start.go:364] duration metric: took 78.804µs to acquireMachinesLock for "embed-certs-412306"
	I1123 10:16:09.603953  356138 start.go:93] Provisioning new machine with config: &{Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:16:09.604048  356138 start.go:125] createHost starting for "" (driver="docker")
	I1123 10:16:09.510617  341630 api_server.go:141] control plane version: v1.34.1
	I1123 10:16:09.510639  341630 api_server.go:131] duration metric: took 7.9515ms to wait for apiserver health ...
	I1123 10:16:09.510646  341630 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:16:09.511774  341630 addons.go:530] duration metric: took 611.647616ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 10:16:09.513306  341630 system_pods.go:59] 6 kube-system pods found
	I1123 10:16:09.513342  341630 system_pods.go:61] "etcd-bridge-791161" [0cef3305-4f78-41d8-955b-4dc8e3e1b20b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:16:09.513353  341630 system_pods.go:61] "kube-apiserver-bridge-791161" [c3ee8173-f846-4c28-9542-5db74dd1ca3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:16:09.513367  341630 system_pods.go:61] "kube-controller-manager-bridge-791161" [f67ddef5-f1cd-4d3f-b388-7d44c2a82e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:16:09.513379  341630 system_pods.go:61] "kube-proxy-sn6s2" [ebbef6f3-f2af-4403-bf85-3391bfe8374f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 10:16:09.513388  341630 system_pods.go:61] "kube-scheduler-bridge-791161" [1b5778a2-5fe1-4a74-9bce-36ef3021458f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:16:09.513400  341630 system_pods.go:61] "storage-provisioner" [450add9d-9942-4b99-b18d-13cf2aac97d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:09.513408  341630 system_pods.go:74] duration metric: took 2.755326ms to wait for pod list to return data ...
	I1123 10:16:09.513421  341630 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:16:09.515529  341630 default_sa.go:45] found service account: "default"
	I1123 10:16:09.515550  341630 default_sa.go:55] duration metric: took 2.122813ms for default service account to be created ...
	I1123 10:16:09.515559  341630 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:16:09.517664  341630 system_pods.go:86] 6 kube-system pods found
	I1123 10:16:09.517695  341630 system_pods.go:89] "etcd-bridge-791161" [0cef3305-4f78-41d8-955b-4dc8e3e1b20b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:16:09.517709  341630 system_pods.go:89] "kube-apiserver-bridge-791161" [c3ee8173-f846-4c28-9542-5db74dd1ca3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:16:09.517719  341630 system_pods.go:89] "kube-controller-manager-bridge-791161" [f67ddef5-f1cd-4d3f-b388-7d44c2a82e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:16:09.517731  341630 system_pods.go:89] "kube-proxy-sn6s2" [ebbef6f3-f2af-4403-bf85-3391bfe8374f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 10:16:09.517738  341630 system_pods.go:89] "kube-scheduler-bridge-791161" [1b5778a2-5fe1-4a74-9bce-36ef3021458f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:16:09.517746  341630 system_pods.go:89] "storage-provisioner" [450add9d-9942-4b99-b18d-13cf2aac97d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:09.517783  341630 retry.go:31] will retry after 269.045888ms: missing components: kube-dns, kube-proxy
	I1123 10:16:09.732517  341630 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-791161" context rescaled to 1 replicas
	I1123 10:16:09.792357  341630 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:09.792401  341630 system_pods.go:89] "coredns-66bc5c9577-5jbpl" [d4bd48f5-9fde-4a68-b96b-a0c62824cadc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:09.792413  341630 system_pods.go:89] "coredns-66bc5c9577-p6sw2" [7a660efc-5dc7-4014-994c-64d53264718d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:09.792424  341630 system_pods.go:89] "etcd-bridge-791161" [0cef3305-4f78-41d8-955b-4dc8e3e1b20b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:16:09.792436  341630 system_pods.go:89] "kube-apiserver-bridge-791161" [c3ee8173-f846-4c28-9542-5db74dd1ca3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:16:09.792446  341630 system_pods.go:89] "kube-controller-manager-bridge-791161" [f67ddef5-f1cd-4d3f-b388-7d44c2a82e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:16:09.792463  341630 system_pods.go:89] "kube-proxy-sn6s2" [ebbef6f3-f2af-4403-bf85-3391bfe8374f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 10:16:09.792475  341630 system_pods.go:89] "kube-scheduler-bridge-791161" [1b5778a2-5fe1-4a74-9bce-36ef3021458f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:16:09.792483  341630 system_pods.go:89] "storage-provisioner" [450add9d-9942-4b99-b18d-13cf2aac97d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:09.792509  341630 retry.go:31] will retry after 270.754186ms: missing components: kube-dns, kube-proxy
	I1123 10:16:10.068331  341630 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:10.068370  341630 system_pods.go:89] "coredns-66bc5c9577-5jbpl" [d4bd48f5-9fde-4a68-b96b-a0c62824cadc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:10.068381  341630 system_pods.go:89] "coredns-66bc5c9577-p6sw2" [7a660efc-5dc7-4014-994c-64d53264718d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:10.068391  341630 system_pods.go:89] "etcd-bridge-791161" [0cef3305-4f78-41d8-955b-4dc8e3e1b20b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:16:10.068400  341630 system_pods.go:89] "kube-apiserver-bridge-791161" [c3ee8173-f846-4c28-9542-5db74dd1ca3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:16:10.068409  341630 system_pods.go:89] "kube-controller-manager-bridge-791161" [f67ddef5-f1cd-4d3f-b388-7d44c2a82e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:16:10.068430  341630 system_pods.go:89] "kube-proxy-sn6s2" [ebbef6f3-f2af-4403-bf85-3391bfe8374f] Running
	I1123 10:16:10.068443  341630 system_pods.go:89] "kube-scheduler-bridge-791161" [1b5778a2-5fe1-4a74-9bce-36ef3021458f] Running
	I1123 10:16:10.068450  341630 system_pods.go:89] "storage-provisioner" [450add9d-9942-4b99-b18d-13cf2aac97d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:10.068477  341630 retry.go:31] will retry after 429.754148ms: missing components: kube-dns
	I1123 10:16:10.503386  341630 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:10.503419  341630 system_pods.go:89] "coredns-66bc5c9577-5jbpl" [d4bd48f5-9fde-4a68-b96b-a0c62824cadc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:10.503426  341630 system_pods.go:89] "coredns-66bc5c9577-p6sw2" [7a660efc-5dc7-4014-994c-64d53264718d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:10.503433  341630 system_pods.go:89] "etcd-bridge-791161" [0cef3305-4f78-41d8-955b-4dc8e3e1b20b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:16:10.503438  341630 system_pods.go:89] "kube-apiserver-bridge-791161" [c3ee8173-f846-4c28-9542-5db74dd1ca3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:16:10.503444  341630 system_pods.go:89] "kube-controller-manager-bridge-791161" [f67ddef5-f1cd-4d3f-b388-7d44c2a82e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:16:10.503448  341630 system_pods.go:89] "kube-proxy-sn6s2" [ebbef6f3-f2af-4403-bf85-3391bfe8374f] Running
	I1123 10:16:10.503451  341630 system_pods.go:89] "kube-scheduler-bridge-791161" [1b5778a2-5fe1-4a74-9bce-36ef3021458f] Running
	I1123 10:16:10.503454  341630 system_pods.go:89] "storage-provisioner" [450add9d-9942-4b99-b18d-13cf2aac97d6] Running
	I1123 10:16:10.503470  341630 retry.go:31] will retry after 408.73206ms: missing components: kube-dns
	I1123 10:16:10.917355  341630 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:10.917398  341630 system_pods.go:89] "coredns-66bc5c9577-5jbpl" [d4bd48f5-9fde-4a68-b96b-a0c62824cadc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:10.917410  341630 system_pods.go:89] "coredns-66bc5c9577-p6sw2" [7a660efc-5dc7-4014-994c-64d53264718d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:10.917420  341630 system_pods.go:89] "etcd-bridge-791161" [0cef3305-4f78-41d8-955b-4dc8e3e1b20b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:16:10.917429  341630 system_pods.go:89] "kube-apiserver-bridge-791161" [c3ee8173-f846-4c28-9542-5db74dd1ca3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:16:10.917451  341630 system_pods.go:89] "kube-controller-manager-bridge-791161" [f67ddef5-f1cd-4d3f-b388-7d44c2a82e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:16:10.917465  341630 system_pods.go:89] "kube-proxy-sn6s2" [ebbef6f3-f2af-4403-bf85-3391bfe8374f] Running
	I1123 10:16:10.917474  341630 system_pods.go:89] "kube-scheduler-bridge-791161" [1b5778a2-5fe1-4a74-9bce-36ef3021458f] Running
	I1123 10:16:10.917478  341630 system_pods.go:89] "storage-provisioner" [450add9d-9942-4b99-b18d-13cf2aac97d6] Running
	I1123 10:16:10.917500  341630 retry.go:31] will retry after 552.289133ms: missing components: kube-dns
	I1123 10:16:09.278883  344952 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:16:09.372128  344952 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 10:16:09.619893  344952 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:16:10.283551  344952 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:16:10.867997  344952 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:16:10.868330  344952 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-541522] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 10:16:10.989337  344952 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:16:10.989485  344952 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-541522] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 10:16:11.169439  344952 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:16:11.400232  344952 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:16:11.647348  344952 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:16:11.647533  344952 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:16:11.771440  344952 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:16:12.267757  344952 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 10:16:12.654977  344952 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:16:12.947814  344952 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:16:13.078046  344952 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:16:13.078626  344952 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:16:13.136374  344952 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 10:16:08.666124  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:09.166689  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:09.666832  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:10.166752  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:10.666681  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:11.165984  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:11.666304  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:12.166196  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:12.666342  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:13.166030  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:13.195964  344952 out.go:252]   - Booting up control plane ...
	I1123 10:16:13.196155  344952 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:16:13.196274  344952 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:16:13.196362  344952 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:16:13.196492  344952 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:16:13.196611  344952 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 10:16:13.196738  344952 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 10:16:13.197029  344952 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:16:13.197260  344952 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:16:13.266865  344952 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 10:16:13.267069  344952 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 10:16:09.606473  356138 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 10:16:09.606832  356138 start.go:159] libmachine.API.Create for "embed-certs-412306" (driver="docker")
	I1123 10:16:09.606885  356138 client.go:173] LocalClient.Create starting
	I1123 10:16:09.607022  356138 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem
	I1123 10:16:09.607067  356138 main.go:143] libmachine: Decoding PEM data...
	I1123 10:16:09.607113  356138 main.go:143] libmachine: Parsing certificate...
	I1123 10:16:09.607181  356138 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem
	I1123 10:16:09.607208  356138 main.go:143] libmachine: Decoding PEM data...
	I1123 10:16:09.607233  356138 main.go:143] libmachine: Parsing certificate...
	I1123 10:16:09.607683  356138 cli_runner.go:164] Run: docker network inspect embed-certs-412306 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 10:16:09.629449  356138 cli_runner.go:211] docker network inspect embed-certs-412306 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 10:16:09.629532  356138 network_create.go:284] running [docker network inspect embed-certs-412306] to gather additional debugging logs...
	I1123 10:16:09.629558  356138 cli_runner.go:164] Run: docker network inspect embed-certs-412306
	W1123 10:16:09.649505  356138 cli_runner.go:211] docker network inspect embed-certs-412306 returned with exit code 1
	I1123 10:16:09.649534  356138 network_create.go:287] error running [docker network inspect embed-certs-412306]: docker network inspect embed-certs-412306: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-412306 not found
	I1123 10:16:09.649551  356138 network_create.go:289] output of [docker network inspect embed-certs-412306]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-412306 not found
	
	** /stderr **
	I1123 10:16:09.649693  356138 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:16:09.668995  356138 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9af1e2c0d039 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:86:44:24:e5:b5} reservation:<nil>}
	I1123 10:16:09.669799  356138 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-461f783b5692 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:1f:63:e6:a3:d5} reservation:<nil>}
	I1123 10:16:09.670740  356138 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-00c53b2b0c8c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:de:97:e2:97:bc:92} reservation:<nil>}
	I1123 10:16:09.671473  356138 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-052388d40ecf IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:de:97:1c:bc:d1:b9} reservation:<nil>}
	I1123 10:16:09.672185  356138 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-0caff4f103e2 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f2:ae:32:4b:cf:65} reservation:<nil>}
	I1123 10:16:09.676786  356138 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d5fec0}
	I1123 10:16:09.676832  356138 network_create.go:124] attempt to create docker network embed-certs-412306 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1123 10:16:09.676908  356138 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-412306 embed-certs-412306
	I1123 10:16:09.737193  356138 network_create.go:108] docker network embed-certs-412306 192.168.94.0/24 created
	I1123 10:16:09.737241  356138 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-412306" container
	I1123 10:16:09.737307  356138 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 10:16:09.758160  356138 cli_runner.go:164] Run: docker volume create embed-certs-412306 --label name.minikube.sigs.k8s.io=embed-certs-412306 --label created_by.minikube.sigs.k8s.io=true
	I1123 10:16:09.779650  356138 oci.go:103] Successfully created a docker volume embed-certs-412306
	I1123 10:16:09.779742  356138 cli_runner.go:164] Run: docker run --rm --name embed-certs-412306-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-412306 --entrypoint /usr/bin/test -v embed-certs-412306:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 10:16:10.255390  356138 oci.go:107] Successfully prepared a docker volume embed-certs-412306
	I1123 10:16:10.255455  356138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:16:10.255469  356138 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 10:16:10.255530  356138 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-412306:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 10:16:11.474871  341630 system_pods.go:86] 7 kube-system pods found
	I1123 10:16:11.474914  341630 system_pods.go:89] "coredns-66bc5c9577-p6sw2" [7a660efc-5dc7-4014-994c-64d53264718d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:11.474924  341630 system_pods.go:89] "etcd-bridge-791161" [0cef3305-4f78-41d8-955b-4dc8e3e1b20b] Running
	I1123 10:16:11.474945  341630 system_pods.go:89] "kube-apiserver-bridge-791161" [c3ee8173-f846-4c28-9542-5db74dd1ca3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:16:11.474955  341630 system_pods.go:89] "kube-controller-manager-bridge-791161" [f67ddef5-f1cd-4d3f-b388-7d44c2a82e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:16:11.474961  341630 system_pods.go:89] "kube-proxy-sn6s2" [ebbef6f3-f2af-4403-bf85-3391bfe8374f] Running
	I1123 10:16:11.474968  341630 system_pods.go:89] "kube-scheduler-bridge-791161" [1b5778a2-5fe1-4a74-9bce-36ef3021458f] Running
	I1123 10:16:11.474973  341630 system_pods.go:89] "storage-provisioner" [450add9d-9942-4b99-b18d-13cf2aac97d6] Running
	I1123 10:16:11.474984  341630 system_pods.go:126] duration metric: took 1.959418216s to wait for k8s-apps to be running ...
	I1123 10:16:11.474994  341630 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:16:11.475054  341630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:16:11.489403  341630 system_svc.go:56] duration metric: took 14.399252ms WaitForService to wait for kubelet
	I1123 10:16:11.489444  341630 kubeadm.go:587] duration metric: took 2.58938325s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:16:11.489470  341630 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:16:11.492755  341630 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:16:11.492782  341630 node_conditions.go:123] node cpu capacity is 8
	I1123 10:16:11.492808  341630 node_conditions.go:105] duration metric: took 3.332237ms to run NodePressure ...
	I1123 10:16:11.492820  341630 start.go:242] waiting for startup goroutines ...
	I1123 10:16:11.492829  341630 start.go:247] waiting for cluster config update ...
	I1123 10:16:11.492840  341630 start.go:256] writing updated cluster config ...
	I1123 10:16:11.493117  341630 ssh_runner.go:195] Run: rm -f paused
	I1123 10:16:11.497127  341630 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:11.501040  341630 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p6sw2" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 10:16:13.507081  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:15.507577  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	I1123 10:16:13.666736  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:14.166653  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:14.666411  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:15.166345  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:15.665938  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:16.166765  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:16.666304  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:17.166588  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:17.665914  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:18.166076  344706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:18.250824  344706 kubeadm.go:1114] duration metric: took 12.162789359s to wait for elevateKubeSystemPrivileges
	I1123 10:16:18.250873  344706 kubeadm.go:403] duration metric: took 24.23117455s to StartCluster
	I1123 10:16:18.250896  344706 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:18.250984  344706 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:16:18.252313  344706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:18.252591  344706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:16:18.252586  344706 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:16:18.252625  344706 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:16:18.252726  344706 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-990757"
	I1123 10:16:18.252748  344706 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-990757"
	I1123 10:16:18.252763  344706 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-990757"
	I1123 10:16:18.252783  344706 host.go:66] Checking if "old-k8s-version-990757" exists ...
	I1123 10:16:18.252788  344706 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-990757"
	I1123 10:16:18.252794  344706 config.go:182] Loaded profile config "old-k8s-version-990757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 10:16:18.253185  344706 cli_runner.go:164] Run: docker container inspect old-k8s-version-990757 --format={{.State.Status}}
	I1123 10:16:18.253439  344706 cli_runner.go:164] Run: docker container inspect old-k8s-version-990757 --format={{.State.Status}}
	I1123 10:16:18.256225  344706 out.go:179] * Verifying Kubernetes components...
	I1123 10:16:18.257663  344706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:16:18.278672  344706 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-990757"
	I1123 10:16:18.278725  344706 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:16:14.780767  356138 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-412306:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.525179702s)
	I1123 10:16:14.780809  356138 kic.go:203] duration metric: took 4.525336925s to extract preloaded images to volume ...
	W1123 10:16:14.780917  356138 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 10:16:14.780972  356138 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 10:16:14.781025  356138 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 10:16:14.851187  356138 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-412306 --name embed-certs-412306 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-412306 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-412306 --network embed-certs-412306 --ip 192.168.94.2 --volume embed-certs-412306:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 10:16:15.210434  356138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Running}}
	I1123 10:16:15.236308  356138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:16:15.262410  356138 cli_runner.go:164] Run: docker exec embed-certs-412306 stat /var/lib/dpkg/alternatives/iptables
	I1123 10:16:15.312245  356138 oci.go:144] the created container "embed-certs-412306" has a running status.
	I1123 10:16:15.312287  356138 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa...
	I1123 10:16:15.508167  356138 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 10:16:15.538609  356138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:16:15.568324  356138 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 10:16:15.568357  356138 kic_runner.go:114] Args: [docker exec --privileged embed-certs-412306 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 10:16:15.633555  356138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:16:15.657069  356138 machine.go:94] provisionDockerMachine start ...
	I1123 10:16:15.657228  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:15.682778  356138 main.go:143] libmachine: Using SSH client type: native
	I1123 10:16:15.683182  356138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 10:16:15.683211  356138 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:16:15.834361  356138 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-412306
	
	I1123 10:16:15.834394  356138 ubuntu.go:182] provisioning hostname "embed-certs-412306"
	I1123 10:16:15.834460  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:15.855149  356138 main.go:143] libmachine: Using SSH client type: native
	I1123 10:16:15.855386  356138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 10:16:15.855408  356138 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-412306 && echo "embed-certs-412306" | sudo tee /etc/hostname
	I1123 10:16:16.024669  356138 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-412306
	
	I1123 10:16:16.024755  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:16.048672  356138 main.go:143] libmachine: Using SSH client type: native
	I1123 10:16:16.048986  356138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 10:16:16.049013  356138 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-412306' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-412306/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-412306' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:16:16.203231  356138 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:16:16.203261  356138 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-64343/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-64343/.minikube}
	I1123 10:16:16.203307  356138 ubuntu.go:190] setting up certificates
	I1123 10:16:16.203329  356138 provision.go:84] configureAuth start
	I1123 10:16:16.203397  356138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412306
	I1123 10:16:16.224391  356138 provision.go:143] copyHostCerts
	I1123 10:16:16.224466  356138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem, removing ...
	I1123 10:16:16.224486  356138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem
	I1123 10:16:16.224568  356138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem (1082 bytes)
	I1123 10:16:16.224688  356138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem, removing ...
	I1123 10:16:16.224702  356138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem
	I1123 10:16:16.224741  356138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem (1123 bytes)
	I1123 10:16:16.224838  356138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem, removing ...
	I1123 10:16:16.224850  356138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem
	I1123 10:16:16.224885  356138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem (1675 bytes)
	I1123 10:16:16.224961  356138 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem org=jenkins.embed-certs-412306 san=[127.0.0.1 192.168.94.2 embed-certs-412306 localhost minikube]
	I1123 10:16:16.252659  356138 provision.go:177] copyRemoteCerts
	I1123 10:16:16.252799  356138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:16:16.252862  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:16.274900  356138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:16:16.381909  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 10:16:16.403354  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:16:16.421969  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:16:16.443591  356138 provision.go:87] duration metric: took 240.241648ms to configureAuth
	I1123 10:16:16.443629  356138 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:16:16.443817  356138 config.go:182] Loaded profile config "embed-certs-412306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:16:16.443936  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:16.464697  356138 main.go:143] libmachine: Using SSH client type: native
	I1123 10:16:16.465000  356138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 10:16:16.465026  356138 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:16:16.768631  356138 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:16:16.768659  356138 machine.go:97] duration metric: took 1.11155421s to provisionDockerMachine
	I1123 10:16:16.768671  356138 client.go:176] duration metric: took 7.161774198s to LocalClient.Create
	I1123 10:16:16.768695  356138 start.go:167] duration metric: took 7.161866501s to libmachine.API.Create "embed-certs-412306"
	I1123 10:16:16.768705  356138 start.go:293] postStartSetup for "embed-certs-412306" (driver="docker")
	I1123 10:16:16.768716  356138 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:16:16.768980  356138 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:16:16.769049  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:16.800429  356138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:16:16.927787  356138 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:16:16.931545  356138 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:16:16.931591  356138 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:16:16.931614  356138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/addons for local assets ...
	I1123 10:16:16.931671  356138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/files for local assets ...
	I1123 10:16:16.931739  356138 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem -> 678702.pem in /etc/ssl/certs
	I1123 10:16:16.931823  356138 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:16:16.939473  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:16:16.959179  356138 start.go:296] duration metric: took 190.46241ms for postStartSetup
	I1123 10:16:16.959501  356138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412306
	I1123 10:16:16.984276  356138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/config.json ...
	I1123 10:16:16.984618  356138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:16:16.984693  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:17.006779  356138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:16:17.112458  356138 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:16:17.117106  356138 start.go:128] duration metric: took 7.513028342s to createHost
	I1123 10:16:17.117133  356138 start.go:83] releasing machines lock for "embed-certs-412306", held for 7.513197957s
	I1123 10:16:17.117208  356138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412306
	I1123 10:16:17.134501  356138 ssh_runner.go:195] Run: cat /version.json
	I1123 10:16:17.134547  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:17.134586  356138 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:16:17.134662  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:17.153344  356138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:16:17.153649  356138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:16:17.310865  356138 ssh_runner.go:195] Run: systemctl --version
	I1123 10:16:17.317393  356138 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:16:17.352355  356138 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:16:17.357116  356138 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:16:17.357180  356138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:16:17.382356  356138 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 10:16:17.382379  356138 start.go:496] detecting cgroup driver to use...
	I1123 10:16:17.382409  356138 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 10:16:17.382462  356138 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:16:17.398562  356138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:16:17.411069  356138 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:16:17.411138  356138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:16:17.427203  356138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:16:17.444861  356138 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:16:17.530800  356138 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:16:17.622946  356138 docker.go:234] disabling docker service ...
	I1123 10:16:17.623025  356138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:16:17.641931  356138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:16:17.654457  356138 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:16:17.747652  356138 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:16:17.845810  356138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:16:17.858620  356138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:16:17.875812  356138 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:16:17.875880  356138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:16:17.888305  356138 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 10:16:17.888379  356138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:16:17.899801  356138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:16:17.911635  356138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:16:17.923072  356138 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:16:17.932765  356138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:16:17.945022  356138 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:16:17.962784  356138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:16:17.974698  356138 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:16:17.984798  356138 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:16:17.994564  356138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:16:18.110636  356138 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:16:18.290560  356138 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:16:18.290681  356138 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:16:18.297099  356138 start.go:564] Will wait 60s for crictl version
	I1123 10:16:18.297225  356138 ssh_runner.go:195] Run: which crictl
	I1123 10:16:18.304375  356138 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:16:18.348465  356138 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:16:18.348551  356138 ssh_runner.go:195] Run: crio --version
	I1123 10:16:18.389627  356138 ssh_runner.go:195] Run: crio --version
	I1123 10:16:18.430444  356138 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:16:18.278756  344706 host.go:66] Checking if "old-k8s-version-990757" exists ...
	I1123 10:16:18.279376  344706 cli_runner.go:164] Run: docker container inspect old-k8s-version-990757 --format={{.State.Status}}
	I1123 10:16:18.279793  344706 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:18.279857  344706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:16:18.280007  344706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-990757
	I1123 10:16:18.306787  344706 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:18.306810  344706 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:16:18.306871  344706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-990757
	I1123 10:16:18.316758  344706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/old-k8s-version-990757/id_rsa Username:docker}
	I1123 10:16:18.336999  344706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/old-k8s-version-990757/id_rsa Username:docker}
	I1123 10:16:18.367903  344706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:16:18.433504  344706 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:16:18.466536  344706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:18.470919  344706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:14.268571  344952 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001808065s
	I1123 10:16:14.273043  344952 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 10:16:14.273189  344952 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1123 10:16:14.273313  344952 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 10:16:14.273420  344952 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 10:16:16.059724  344952 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.786566479s
	I1123 10:16:16.921595  344952 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.648519148s
	I1123 10:16:18.777367  344952 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.504051541s
	I1123 10:16:18.794664  344952 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:16:18.805590  344952 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:16:18.816203  344952 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:16:18.816513  344952 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-541522 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:16:18.824772  344952 kubeadm.go:319] [bootstrap-token] Using token: mhptlw.q9ng0jhdmffx1zol
	I1123 10:16:18.826026  344952 out.go:252]   - Configuring RBAC rules ...
	I1123 10:16:18.826262  344952 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:16:18.830334  344952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:16:18.838855  344952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:16:18.843285  344952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:16:18.845986  344952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:16:18.848662  344952 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:16:18.647290  344706 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 10:16:18.648399  344706 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-990757" to be "Ready" ...
	I1123 10:16:18.933557  344706 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 10:16:18.431580  356138 cli_runner.go:164] Run: docker network inspect embed-certs-412306 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:16:18.458210  356138 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1123 10:16:18.464771  356138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
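Note: the /etc/hosts rewrite above is minikube's idempotent host-record update: strip any line already ending in the target name, append a fresh IP<TAB>name pair, stage the result in a PID-suffixed temp file, and copy it back with sudo. The same pattern, sketched standalone with a hypothetical name and IP:

	# hypothetical name/IP; the braces group both commands into one redirected stream
	NAME=host.example.internal
	IP=192.0.2.10
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts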
	I1123 10:16:18.479461  356138 kubeadm.go:884] updating cluster {Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:16:18.479617  356138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:16:18.479685  356138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:16:18.535015  356138 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:16:18.535043  356138 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:16:18.535112  356138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:16:18.576193  356138 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:16:18.576222  356138 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:16:18.576333  356138 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1123 10:16:18.576476  356138 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-412306 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:16:18.576564  356138 ssh_runner.go:195] Run: crio config
	I1123 10:16:18.633738  356138 cni.go:84] Creating CNI manager for ""
	I1123 10:16:18.633768  356138 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:16:18.633790  356138 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:16:18.633824  356138 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-412306 NodeName:embed-certs-412306 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:16:18.633989  356138 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-412306"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
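Note: the KubeletConfiguration above deliberately turns off disk-pressure housekeeping (imageGCHighThresholdPercent: 100 and 0% evictionHard thresholds) and points the kubelet at the CRI-O socket. Once the node is up, the effective kubelet settings can be read back through the API server's node proxy; a sketch using the node name from this run (jq is assumed to be available and is only used for filtering):

	# dump the kubelet's live configuration and pick out the eviction thresholds
	kubectl get --raw "/api/v1/nodes/embed-certs-412306/proxy/configz" | jq '.kubeletconfig.evictionHard'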
	I1123 10:16:18.634064  356138 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:16:18.647059  356138 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:16:18.647172  356138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:16:18.658381  356138 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1123 10:16:18.675184  356138 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:16:18.696460  356138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1123 10:16:18.712392  356138 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:16:18.717832  356138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:16:18.731391  356138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:16:18.841960  356138 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:16:18.878215  356138 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306 for IP: 192.168.94.2
	I1123 10:16:18.878238  356138 certs.go:195] generating shared ca certs ...
	I1123 10:16:18.878258  356138 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:18.878425  356138 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 10:16:18.878475  356138 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 10:16:18.878488  356138 certs.go:257] generating profile certs ...
	I1123 10:16:18.878556  356138 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/client.key
	I1123 10:16:18.878580  356138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/client.crt with IP's: []
	I1123 10:16:19.147317  356138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/client.crt ...
	I1123 10:16:19.147348  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/client.crt: {Name:mkbf59c08f4785d244500114d39649c207c90bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:19.147525  356138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/client.key ...
	I1123 10:16:19.147545  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/client.key: {Name:mkb75245d2cacd41a4a207ee2cc5a25d4ea8629b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:19.147671  356138 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key.7dd66a37
	I1123 10:16:19.147694  356138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.crt.7dd66a37 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1123 10:16:19.174958  356138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.crt.7dd66a37 ...
	I1123 10:16:19.174991  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.crt.7dd66a37: {Name:mk680cab74fc85275258d54871c4d313a4cfa6da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:19.175171  356138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key.7dd66a37 ...
	I1123 10:16:19.175191  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key.7dd66a37: {Name:mk076b1fd9788864d5fa8bfdccf76cb7bad2f09d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:19.175299  356138 certs.go:382] copying /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.crt.7dd66a37 -> /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.crt
	I1123 10:16:19.175403  356138 certs.go:386] copying /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key.7dd66a37 -> /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key
	I1123 10:16:19.175476  356138 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.key
	I1123 10:16:19.175494  356138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.crt with IP's: []
	I1123 10:16:19.340924  356138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.crt ...
	I1123 10:16:19.340952  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.crt: {Name:mkd487bb2ca9fa1bc04caff7aa2bcbc384decd7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:19.341151  356138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.key ...
	I1123 10:16:19.341173  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.key: {Name:mk7c8f5756d2d24a341f272a1597aebf84673b6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:19.341385  356138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem (1338 bytes)
	W1123 10:16:19.341439  356138 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870_empty.pem, impossibly tiny 0 bytes
	I1123 10:16:19.341456  356138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 10:16:19.341495  356138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:16:19.341530  356138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:16:19.341573  356138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 10:16:19.341632  356138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:16:19.342348  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:16:19.363830  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:16:19.385303  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:16:19.406023  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 10:16:19.433442  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 10:16:19.463003  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 10:16:19.482783  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:16:19.500070  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:16:19.520265  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:16:19.541432  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem --> /usr/share/ca-certificates/67870.pem (1338 bytes)
	I1123 10:16:19.559861  356138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /usr/share/ca-certificates/678702.pem (1708 bytes)
	I1123 10:16:19.581528  356138 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
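Note: the apiserver certificate copied above was generated for the service VIP, loopback, and node IP (see the crypto.go line listing [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]). Its SANs can be checked on the node with standard openssl flags once the file is in place:

	# list the Subject Alternative Names baked into the installed apiserver cert
	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'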
	I1123 10:16:19.597355  356138 ssh_runner.go:195] Run: openssl version
	I1123 10:16:19.604898  356138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:16:19.614800  356138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:16:19.619006  356138 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:16:19.619057  356138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:16:19.654890  356138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:16:19.664327  356138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67870.pem && ln -fs /usr/share/ca-certificates/67870.pem /etc/ssl/certs/67870.pem"
	I1123 10:16:19.673063  356138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67870.pem
	I1123 10:16:19.676814  356138 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:28 /usr/share/ca-certificates/67870.pem
	I1123 10:16:19.676871  356138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67870.pem
	I1123 10:16:19.721797  356138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67870.pem /etc/ssl/certs/51391683.0"
	I1123 10:16:19.730991  356138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678702.pem && ln -fs /usr/share/ca-certificates/678702.pem /etc/ssl/certs/678702.pem"
	I1123 10:16:19.739616  356138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678702.pem
	I1123 10:16:19.743418  356138 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:28 /usr/share/ca-certificates/678702.pem
	I1123 10:16:19.743475  356138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678702.pem
	I1123 10:16:19.777638  356138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678702.pem /etc/ssl/certs/3ec20f2e.0"
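Note: the openssl x509 -hash calls above compute the subject hash that OpenSSL's certificate-directory lookup expects, and the following ln -fs commands create the matching <hash>.0 links under /etc/ssl/certs (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). Doing the same by hand for one certificate looks like this:

	# link a CA cert into the OpenSSL hash directory so chain verification can find it
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"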
	I1123 10:16:19.787103  356138 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:16:19.790766  356138 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 10:16:19.790816  356138 kubeadm.go:401] StartCluster: {Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:16:19.790901  356138 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:16:19.790939  356138 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:16:19.819126  356138 cri.go:89] found id: ""
	I1123 10:16:19.819202  356138 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:16:19.827259  356138 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:16:19.835053  356138 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 10:16:19.835138  356138 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:16:19.842912  356138 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 10:16:19.842928  356138 kubeadm.go:158] found existing configuration files:
	
	I1123 10:16:19.842967  356138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 10:16:19.850209  356138 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 10:16:19.850251  356138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 10:16:19.857884  356138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 10:16:19.866646  356138 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 10:16:19.866697  356138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:16:19.874327  356138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 10:16:19.881762  356138 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 10:16:19.881807  356138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:16:19.889164  356138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 10:16:19.896714  356138 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 10:16:19.896758  356138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 10:16:19.904290  356138 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 10:16:19.943603  356138 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 10:16:19.943708  356138 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 10:16:19.965048  356138 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 10:16:19.965154  356138 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 10:16:19.965246  356138 kubeadm.go:319] OS: Linux
	I1123 10:16:19.965327  356138 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 10:16:19.965405  356138 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 10:16:19.965481  356138 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 10:16:19.965573  356138 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 10:16:19.965644  356138 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 10:16:19.965732  356138 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 10:16:19.965823  356138 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 10:16:19.965891  356138 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 10:16:20.026266  356138 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 10:16:20.026438  356138 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 10:16:20.026607  356138 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 10:16:20.033615  356138 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 10:16:19.189076  344952 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:16:19.601794  344952 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:16:20.183417  344952 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:16:20.185182  344952 kubeadm.go:319] 
	I1123 10:16:20.185298  344952 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:16:20.185319  344952 kubeadm.go:319] 
	I1123 10:16:20.185397  344952 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:16:20.185409  344952 kubeadm.go:319] 
	I1123 10:16:20.185430  344952 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:16:20.185517  344952 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:16:20.185598  344952 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:16:20.185607  344952 kubeadm.go:319] 
	I1123 10:16:20.185682  344952 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:16:20.185690  344952 kubeadm.go:319] 
	I1123 10:16:20.185750  344952 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:16:20.185764  344952 kubeadm.go:319] 
	I1123 10:16:20.185817  344952 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:16:20.185945  344952 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:16:20.186023  344952 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:16:20.186032  344952 kubeadm.go:319] 
	I1123 10:16:20.186178  344952 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:16:20.186301  344952 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:16:20.186313  344952 kubeadm.go:319] 
	I1123 10:16:20.186423  344952 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token mhptlw.q9ng0jhdmffx1zol \
	I1123 10:16:20.186578  344952 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 \
	I1123 10:16:20.186625  344952 kubeadm.go:319] 	--control-plane 
	I1123 10:16:20.186634  344952 kubeadm.go:319] 
	I1123 10:16:20.186761  344952 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:16:20.186780  344952 kubeadm.go:319] 
	I1123 10:16:20.186885  344952 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token mhptlw.q9ng0jhdmffx1zol \
	I1123 10:16:20.187030  344952 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 
	I1123 10:16:20.189698  344952 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 10:16:20.189890  344952 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 10:16:20.189921  344952 cni.go:84] Creating CNI manager for ""
	I1123 10:16:20.189943  344952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:16:20.192370  344952 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1123 10:16:18.007511  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:20.508070  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	I1123 10:16:18.934624  344706 addons.go:530] duration metric: took 681.995047ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 10:16:19.151704  344706 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-990757" context rescaled to 1 replicas
	W1123 10:16:20.652483  344706 node_ready.go:57] node "old-k8s-version-990757" has "Ready":"False" status (will retry)
	W1123 10:16:23.151550  344706 node_ready.go:57] node "old-k8s-version-990757" has "Ready":"False" status (will retry)
	I1123 10:16:20.035950  356138 out.go:252]   - Generating certificates and keys ...
	I1123 10:16:20.036023  356138 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 10:16:20.036138  356138 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 10:16:20.199227  356138 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:16:20.296867  356138 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 10:16:20.649116  356138 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:16:20.853583  356138 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:16:21.223354  356138 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:16:21.223524  356138 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-412306 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1123 10:16:21.589454  356138 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:16:21.589601  356138 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-412306 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1123 10:16:21.712733  356138 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:16:22.231370  356138 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:16:22.493251  356138 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:16:22.493387  356138 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:16:22.795558  356138 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:16:22.972083  356138 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 10:16:23.034642  356138 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:16:23.345102  356138 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:16:23.769569  356138 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:16:23.770179  356138 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:16:23.773491  356138 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 10:16:20.193529  344952 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:16:20.198365  344952 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:16:20.198385  344952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:16:20.211881  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:16:20.437045  344952 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:16:20.437128  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:20.437165  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-541522 minikube.k8s.io/updated_at=2025_11_23T10_16_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=no-preload-541522 minikube.k8s.io/primary=true
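Note: the kubectl label call above stamps the new control-plane node with minikube's bookkeeping labels (updated_at, version, commit, name, primary). Whether they landed can be checked with --show-labels, here against the same in-node binary and kubeconfig the test invokes:

	# confirm the minikube.k8s.io/* labels are present on the node
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get node no-preload-541522 --show-labels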
	I1123 10:16:20.561626  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:20.561779  344952 ops.go:34] apiserver oom_adj: -16
	I1123 10:16:21.061993  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:21.561692  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:22.061999  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:22.561862  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:23.062326  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:23.561744  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:23.775519  356138 out.go:252]   - Booting up control plane ...
	I1123 10:16:23.775641  356138 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:16:23.775760  356138 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:16:23.775870  356138 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:16:23.790389  356138 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:16:23.790543  356138 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 10:16:23.797027  356138 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 10:16:23.797353  356138 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:16:23.797453  356138 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:16:23.917379  356138 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 10:16:23.917528  356138 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 10:16:24.062736  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:24.562369  344952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:24.632270  344952 kubeadm.go:1114] duration metric: took 4.195217058s to wait for elevateKubeSystemPrivileges
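Note: the burst of "kubectl get sa default" calls above is the elevateKubeSystemPrivileges wait: minikube creates the minikube-rbac ClusterRoleBinding and then polls until the default ServiceAccount exists, which took about 4.2s here. The same wait as a plain shell loop:

	# poll until the default ServiceAccount shows up in the default namespace
	until kubectl -n default get serviceaccount default >/dev/null 2>&1; do sleep 0.5; done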
	I1123 10:16:24.632308  344952 kubeadm.go:403] duration metric: took 16.142295896s to StartCluster
	I1123 10:16:24.632326  344952 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:24.632400  344952 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:16:24.633884  344952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:24.634150  344952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:16:24.634179  344952 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:16:24.634251  344952 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:16:24.634355  344952 addons.go:70] Setting storage-provisioner=true in profile "no-preload-541522"
	I1123 10:16:24.634368  344952 addons.go:70] Setting default-storageclass=true in profile "no-preload-541522"
	I1123 10:16:24.634377  344952 addons.go:239] Setting addon storage-provisioner=true in "no-preload-541522"
	I1123 10:16:24.634388  344952 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-541522"
	I1123 10:16:24.634410  344952 host.go:66] Checking if "no-preload-541522" exists ...
	I1123 10:16:24.634455  344952 config.go:182] Loaded profile config "no-preload-541522": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:16:24.634764  344952 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:16:24.634912  344952 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:16:24.635539  344952 out.go:179] * Verifying Kubernetes components...
	I1123 10:16:24.636521  344952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:16:24.657418  344952 addons.go:239] Setting addon default-storageclass=true in "no-preload-541522"
	I1123 10:16:24.657470  344952 host.go:66] Checking if "no-preload-541522" exists ...
	I1123 10:16:24.657938  344952 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:16:24.658491  344952 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:16:24.659646  344952 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:24.659666  344952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:16:24.659724  344952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:16:24.685525  344952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:16:24.690195  344952 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:24.690219  344952 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:16:24.690298  344952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:16:24.724298  344952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:16:24.750701  344952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:16:24.796123  344952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:16:24.848328  344952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:24.848334  344952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:24.923983  344952 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 10:16:24.925356  344952 node_ready.go:35] waiting up to 6m0s for node "no-preload-541522" to be "Ready" ...
	I1123 10:16:25.228703  344952 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1123 10:16:23.006965  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:25.008124  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:25.154186  344706 node_ready.go:57] node "old-k8s-version-990757" has "Ready":"False" status (will retry)
	W1123 10:16:27.651716  344706 node_ready.go:57] node "old-k8s-version-990757" has "Ready":"False" status (will retry)
	I1123 10:16:25.229824  344952 addons.go:530] duration metric: took 595.565525ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 10:16:25.428798  344952 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-541522" context rescaled to 1 replicas
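Note: at this point both newly started profiles report the same pair of addons enabled (storage-provisioner and default-storageclass). Outside the harness, the per-profile addon state can be listed with minikube's addons list subcommand, e.g.:

	# show addon status for the no-preload profile from this run
	minikube -p no-preload-541522 addons list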
	W1123 10:16:26.929589  344952 node_ready.go:57] node "no-preload-541522" has "Ready":"False" status (will retry)
	I1123 10:16:24.918996  356138 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001753375s
	I1123 10:16:24.925621  356138 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 10:16:24.925735  356138 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1123 10:16:24.925858  356138 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 10:16:24.925971  356138 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 10:16:26.512191  356138 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.587992193s
	I1123 10:16:27.081491  356138 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.157460492s
	I1123 10:16:28.925636  356138 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001590433s
	I1123 10:16:28.937425  356138 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:16:28.947025  356138 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:16:28.955505  356138 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:16:28.955787  356138 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-412306 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:16:28.963030  356138 kubeadm.go:319] [bootstrap-token] Using token: 2diej7.g3irisej2sfcnkox
	I1123 10:16:28.965317  356138 out.go:252]   - Configuring RBAC rules ...
	I1123 10:16:28.965442  356138 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:16:28.968022  356138 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:16:28.973224  356138 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:16:28.975951  356138 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:16:28.978262  356138 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:16:28.981645  356138 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:16:29.331628  356138 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:16:29.745711  356138 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:16:30.331119  356138 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:16:30.331918  356138 kubeadm.go:319] 
	I1123 10:16:30.332036  356138 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:16:30.332056  356138 kubeadm.go:319] 
	I1123 10:16:30.332201  356138 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:16:30.332221  356138 kubeadm.go:319] 
	I1123 10:16:30.332275  356138 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:16:30.332347  356138 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:16:30.332408  356138 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:16:30.332416  356138 kubeadm.go:319] 
	I1123 10:16:30.332478  356138 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:16:30.332486  356138 kubeadm.go:319] 
	I1123 10:16:30.332540  356138 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:16:30.332548  356138 kubeadm.go:319] 
	I1123 10:16:30.332612  356138 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:16:30.332708  356138 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:16:30.332818  356138 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:16:30.332837  356138 kubeadm.go:319] 
	I1123 10:16:30.332958  356138 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:16:30.333060  356138 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:16:30.333076  356138 kubeadm.go:319] 
	I1123 10:16:30.333211  356138 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2diej7.g3irisej2sfcnkox \
	I1123 10:16:30.333342  356138 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 \
	I1123 10:16:30.333366  356138 kubeadm.go:319] 	--control-plane 
	I1123 10:16:30.333375  356138 kubeadm.go:319] 
	I1123 10:16:30.333446  356138 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:16:30.333451  356138 kubeadm.go:319] 
	I1123 10:16:30.333535  356138 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2diej7.g3irisej2sfcnkox \
	I1123 10:16:30.333651  356138 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 
	I1123 10:16:30.336224  356138 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 10:16:30.336339  356138 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 10:16:30.336389  356138 cni.go:84] Creating CNI manager for ""
	I1123 10:16:30.336405  356138 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:16:30.401160  356138 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1123 10:16:27.506801  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:29.507199  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:29.651902  344706 node_ready.go:57] node "old-k8s-version-990757" has "Ready":"False" status (will retry)
	W1123 10:16:32.152208  344706 node_ready.go:57] node "old-k8s-version-990757" has "Ready":"False" status (will retry)
	I1123 10:16:32.651044  344706 node_ready.go:49] node "old-k8s-version-990757" is "Ready"
	I1123 10:16:32.651072  344706 node_ready.go:38] duration metric: took 14.002600443s for node "old-k8s-version-990757" to be "Ready" ...
	I1123 10:16:32.651103  344706 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:16:32.651154  344706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:16:32.664668  344706 api_server.go:72] duration metric: took 14.412040415s to wait for apiserver process to appear ...
	I1123 10:16:32.664699  344706 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:16:32.664734  344706 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:16:32.671045  344706 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:16:32.672175  344706 api_server.go:141] control plane version: v1.28.0
	I1123 10:16:32.672198  344706 api_server.go:131] duration metric: took 7.493612ms to wait for apiserver health ...
	I1123 10:16:32.672206  344706 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:16:32.675396  344706 system_pods.go:59] 8 kube-system pods found
	I1123 10:16:32.675423  344706 system_pods.go:61] "coredns-5dd5756b68-fsbfv" [d381637c-3686-4e19-95eb-489a0328363d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:32.675429  344706 system_pods.go:61] "etcd-old-k8s-version-990757" [9544c436-c89f-4d93-961e-c3d059a7e093] Running
	I1123 10:16:32.675438  344706 system_pods.go:61] "kindnet-nz2m9" [2de3e7ea-96dc-4120-8500-245759aaacda] Running
	I1123 10:16:32.675442  344706 system_pods.go:61] "kube-apiserver-old-k8s-version-990757" [ad563081-657a-4c35-8404-696aa7aa0e9c] Running
	I1123 10:16:32.675446  344706 system_pods.go:61] "kube-controller-manager-old-k8s-version-990757" [71f2226e-4030-45a3-a5dc-1f58332c62d8] Running
	I1123 10:16:32.675455  344706 system_pods.go:61] "kube-proxy-99g4b" [d727ffbe-b078-4abf-a715-fc9811920e00] Running
	I1123 10:16:32.675461  344706 system_pods.go:61] "kube-scheduler-old-k8s-version-990757" [6d10eeed-2aa8-44d8-9800-7b8a0992f902] Running
	I1123 10:16:32.675466  344706 system_pods.go:61] "storage-provisioner" [b9036b3a-e19e-439b-9584-93d805cb21ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:32.675474  344706 system_pods.go:74] duration metric: took 3.26216ms to wait for pod list to return data ...
	I1123 10:16:32.675483  344706 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:16:32.677500  344706 default_sa.go:45] found service account: "default"
	I1123 10:16:32.677517  344706 default_sa.go:55] duration metric: took 2.029784ms for default service account to be created ...
	I1123 10:16:32.677525  344706 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:16:32.680674  344706 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:32.680700  344706 system_pods.go:89] "coredns-5dd5756b68-fsbfv" [d381637c-3686-4e19-95eb-489a0328363d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:32.680707  344706 system_pods.go:89] "etcd-old-k8s-version-990757" [9544c436-c89f-4d93-961e-c3d059a7e093] Running
	I1123 10:16:32.680719  344706 system_pods.go:89] "kindnet-nz2m9" [2de3e7ea-96dc-4120-8500-245759aaacda] Running
	I1123 10:16:32.680730  344706 system_pods.go:89] "kube-apiserver-old-k8s-version-990757" [ad563081-657a-4c35-8404-696aa7aa0e9c] Running
	I1123 10:16:32.680736  344706 system_pods.go:89] "kube-controller-manager-old-k8s-version-990757" [71f2226e-4030-45a3-a5dc-1f58332c62d8] Running
	I1123 10:16:32.680745  344706 system_pods.go:89] "kube-proxy-99g4b" [d727ffbe-b078-4abf-a715-fc9811920e00] Running
	I1123 10:16:32.680751  344706 system_pods.go:89] "kube-scheduler-old-k8s-version-990757" [6d10eeed-2aa8-44d8-9800-7b8a0992f902] Running
	I1123 10:16:32.680760  344706 system_pods.go:89] "storage-provisioner" [b9036b3a-e19e-439b-9584-93d805cb21ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:32.680799  344706 retry.go:31] will retry after 291.35829ms: missing components: kube-dns
	I1123 10:16:32.977121  344706 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:32.977154  344706 system_pods.go:89] "coredns-5dd5756b68-fsbfv" [d381637c-3686-4e19-95eb-489a0328363d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:32.977161  344706 system_pods.go:89] "etcd-old-k8s-version-990757" [9544c436-c89f-4d93-961e-c3d059a7e093] Running
	I1123 10:16:32.977168  344706 system_pods.go:89] "kindnet-nz2m9" [2de3e7ea-96dc-4120-8500-245759aaacda] Running
	I1123 10:16:32.977172  344706 system_pods.go:89] "kube-apiserver-old-k8s-version-990757" [ad563081-657a-4c35-8404-696aa7aa0e9c] Running
	I1123 10:16:32.977176  344706 system_pods.go:89] "kube-controller-manager-old-k8s-version-990757" [71f2226e-4030-45a3-a5dc-1f58332c62d8] Running
	I1123 10:16:32.977188  344706 system_pods.go:89] "kube-proxy-99g4b" [d727ffbe-b078-4abf-a715-fc9811920e00] Running
	I1123 10:16:32.977195  344706 system_pods.go:89] "kube-scheduler-old-k8s-version-990757" [6d10eeed-2aa8-44d8-9800-7b8a0992f902] Running
	I1123 10:16:32.977199  344706 system_pods.go:89] "storage-provisioner" [b9036b3a-e19e-439b-9584-93d805cb21ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:32.977215  344706 retry.go:31] will retry after 325.371921ms: missing components: kube-dns
	I1123 10:16:33.307183  344706 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:33.307222  344706 system_pods.go:89] "coredns-5dd5756b68-fsbfv" [d381637c-3686-4e19-95eb-489a0328363d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:33.307228  344706 system_pods.go:89] "etcd-old-k8s-version-990757" [9544c436-c89f-4d93-961e-c3d059a7e093] Running
	I1123 10:16:33.307234  344706 system_pods.go:89] "kindnet-nz2m9" [2de3e7ea-96dc-4120-8500-245759aaacda] Running
	I1123 10:16:33.307237  344706 system_pods.go:89] "kube-apiserver-old-k8s-version-990757" [ad563081-657a-4c35-8404-696aa7aa0e9c] Running
	I1123 10:16:33.307241  344706 system_pods.go:89] "kube-controller-manager-old-k8s-version-990757" [71f2226e-4030-45a3-a5dc-1f58332c62d8] Running
	I1123 10:16:33.307244  344706 system_pods.go:89] "kube-proxy-99g4b" [d727ffbe-b078-4abf-a715-fc9811920e00] Running
	I1123 10:16:33.307253  344706 system_pods.go:89] "kube-scheduler-old-k8s-version-990757" [6d10eeed-2aa8-44d8-9800-7b8a0992f902] Running
	I1123 10:16:33.307257  344706 system_pods.go:89] "storage-provisioner" [b9036b3a-e19e-439b-9584-93d805cb21ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:33.307274  344706 retry.go:31] will retry after 477.295588ms: missing components: kube-dns
	W1123 10:16:29.428459  344952 node_ready.go:57] node "no-preload-541522" has "Ready":"False" status (will retry)
	W1123 10:16:31.428879  344952 node_ready.go:57] node "no-preload-541522" has "Ready":"False" status (will retry)
	W1123 10:16:33.429049  344952 node_ready.go:57] node "no-preload-541522" has "Ready":"False" status (will retry)
	I1123 10:16:30.402276  356138 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:16:30.407016  356138 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:16:30.407034  356138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:16:30.424045  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:16:30.638241  356138 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:16:30.638352  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:30.638388  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-412306 minikube.k8s.io/updated_at=2025_11_23T10_16_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=embed-certs-412306 minikube.k8s.io/primary=true
	I1123 10:16:30.648402  356138 ops.go:34] apiserver oom_adj: -16
	I1123 10:16:30.709488  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:31.210134  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:31.710498  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:32.209893  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:32.709530  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:33.209575  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:33.709563  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:34.210241  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:34.709746  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:35.210264  356138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:16:35.283600  356138 kubeadm.go:1114] duration metric: took 4.64531381s to wait for elevateKubeSystemPrivileges
	I1123 10:16:35.283643  356138 kubeadm.go:403] duration metric: took 15.49282887s to StartCluster
	I1123 10:16:35.283665  356138 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:35.283762  356138 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:16:35.285869  356138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:16:35.286180  356138 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:16:35.286331  356138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:16:35.286610  356138 config.go:182] Loaded profile config "embed-certs-412306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:16:35.286435  356138 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:16:35.286707  356138 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-412306"
	I1123 10:16:35.286812  356138 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-412306"
	I1123 10:16:35.286885  356138 host.go:66] Checking if "embed-certs-412306" exists ...
	I1123 10:16:35.286746  356138 addons.go:70] Setting default-storageclass=true in profile "embed-certs-412306"
	I1123 10:16:35.287011  356138 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-412306"
	I1123 10:16:35.287600  356138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:16:35.287780  356138 out.go:179] * Verifying Kubernetes components...
	I1123 10:16:35.288910  356138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:16:35.289524  356138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:16:35.314640  356138 addons.go:239] Setting addon default-storageclass=true in "embed-certs-412306"
	I1123 10:16:35.314789  356138 host.go:66] Checking if "embed-certs-412306" exists ...
	I1123 10:16:35.315364  356138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:16:35.316039  356138 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:16:33.788957  344706 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:33.788988  344706 system_pods.go:89] "coredns-5dd5756b68-fsbfv" [d381637c-3686-4e19-95eb-489a0328363d] Running
	I1123 10:16:33.788994  344706 system_pods.go:89] "etcd-old-k8s-version-990757" [9544c436-c89f-4d93-961e-c3d059a7e093] Running
	I1123 10:16:33.788997  344706 system_pods.go:89] "kindnet-nz2m9" [2de3e7ea-96dc-4120-8500-245759aaacda] Running
	I1123 10:16:33.789001  344706 system_pods.go:89] "kube-apiserver-old-k8s-version-990757" [ad563081-657a-4c35-8404-696aa7aa0e9c] Running
	I1123 10:16:33.789006  344706 system_pods.go:89] "kube-controller-manager-old-k8s-version-990757" [71f2226e-4030-45a3-a5dc-1f58332c62d8] Running
	I1123 10:16:33.789009  344706 system_pods.go:89] "kube-proxy-99g4b" [d727ffbe-b078-4abf-a715-fc9811920e00] Running
	I1123 10:16:33.789013  344706 system_pods.go:89] "kube-scheduler-old-k8s-version-990757" [6d10eeed-2aa8-44d8-9800-7b8a0992f902] Running
	I1123 10:16:33.789017  344706 system_pods.go:89] "storage-provisioner" [b9036b3a-e19e-439b-9584-93d805cb21ea] Running
	I1123 10:16:33.789025  344706 system_pods.go:126] duration metric: took 1.111493702s to wait for k8s-apps to be running ...
	I1123 10:16:33.789036  344706 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:16:33.789083  344706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:16:33.801872  344706 system_svc.go:56] duration metric: took 12.824145ms WaitForService to wait for kubelet
	I1123 10:16:33.801901  344706 kubeadm.go:587] duration metric: took 15.549282124s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:16:33.801917  344706 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:16:33.804486  344706 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:16:33.804512  344706 node_conditions.go:123] node cpu capacity is 8
	I1123 10:16:33.804532  344706 node_conditions.go:105] duration metric: took 2.608231ms to run NodePressure ...
	I1123 10:16:33.804549  344706 start.go:242] waiting for startup goroutines ...
	I1123 10:16:33.804563  344706 start.go:247] waiting for cluster config update ...
	I1123 10:16:33.804579  344706 start.go:256] writing updated cluster config ...
	I1123 10:16:33.804859  344706 ssh_runner.go:195] Run: rm -f paused
	I1123 10:16:33.808438  344706 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:33.812221  344706 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-fsbfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:33.816745  344706 pod_ready.go:94] pod "coredns-5dd5756b68-fsbfv" is "Ready"
	I1123 10:16:33.816770  344706 pod_ready.go:86] duration metric: took 4.52627ms for pod "coredns-5dd5756b68-fsbfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:33.819363  344706 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:33.823014  344706 pod_ready.go:94] pod "etcd-old-k8s-version-990757" is "Ready"
	I1123 10:16:33.823034  344706 pod_ready.go:86] duration metric: took 3.64929ms for pod "etcd-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:33.825305  344706 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:33.830141  344706 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-990757" is "Ready"
	I1123 10:16:33.830162  344706 pod_ready.go:86] duration metric: took 4.841585ms for pod "kube-apiserver-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:33.832571  344706 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:34.213051  344706 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-990757" is "Ready"
	I1123 10:16:34.213110  344706 pod_ready.go:86] duration metric: took 380.4924ms for pod "kube-controller-manager-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:34.413069  344706 pod_ready.go:83] waiting for pod "kube-proxy-99g4b" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:34.813198  344706 pod_ready.go:94] pod "kube-proxy-99g4b" is "Ready"
	I1123 10:16:34.813228  344706 pod_ready.go:86] duration metric: took 400.102635ms for pod "kube-proxy-99g4b" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:35.012747  344706 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:35.412818  344706 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-990757" is "Ready"
	I1123 10:16:35.412845  344706 pod_ready.go:86] duration metric: took 400.068338ms for pod "kube-scheduler-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:35.412857  344706 pod_ready.go:40] duration metric: took 1.604388715s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:35.469188  344706 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1123 10:16:35.510336  344706 out.go:203] 
	W1123 10:16:35.512291  344706 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 10:16:35.513439  344706 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 10:16:35.514923  344706 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-990757" cluster and "default" namespace by default
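The start log above follows the same readiness sequence for every profile: wait for the node to report Ready, wait for the kube-apiserver process, poll the apiserver healthz endpoint until it answers 200 "ok", then check kube-system pods, the default service account, and the kubelet service. Below is a minimal Go sketch of that healthz poll; it is illustrative only (not minikube's api_server.go), and the URL, timeout, and interval are example values taken from this run.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver healthz endpoint until it returns 200 or
// the deadline passes, mirroring the "Checking apiserver healthz" lines above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The test cluster serves a self-signed certificate, so this sketch skips
		// verification; real code should trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body) // e.g. "ok"
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	// 192.168.76.2:8443 is the old-k8s-version node's apiserver in the log above.
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}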
	I1123 10:16:35.317954  356138 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:35.317987  356138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:16:35.318441  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:35.340962  356138 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:35.340989  356138 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:16:35.341107  356138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:16:35.347702  356138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:16:35.369097  356138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:16:35.375674  356138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:16:35.442865  356138 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:16:35.465653  356138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:16:35.487123  356138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:16:35.561205  356138 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1123 10:16:35.562463  356138 node_ready.go:35] waiting up to 6m0s for node "embed-certs-412306" to be "Ready" ...
	I1123 10:16:35.788632  356138 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1123 10:16:32.005830  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:34.006310  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:36.007382  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:35.430057  344952 node_ready.go:57] node "no-preload-541522" has "Ready":"False" status (will retry)
	W1123 10:16:37.929223  344952 node_ready.go:57] node "no-preload-541522" has "Ready":"False" status (will retry)
	I1123 10:16:35.789494  356138 addons.go:530] duration metric: took 503.064926ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 10:16:36.066022  356138 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-412306" context rescaled to 1 replicas
	W1123 10:16:37.565650  356138 node_ready.go:57] node "embed-certs-412306" has "Ready":"False" status (will retry)
	W1123 10:16:38.507551  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	W1123 10:16:41.006771  341630 pod_ready.go:104] pod "coredns-66bc5c9577-p6sw2" is not "Ready", error: <nil>
	I1123 10:16:38.928775  344952 node_ready.go:49] node "no-preload-541522" is "Ready"
	I1123 10:16:38.928809  344952 node_ready.go:38] duration metric: took 14.003414343s for node "no-preload-541522" to be "Ready" ...
	I1123 10:16:38.928827  344952 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:16:38.928893  344952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:16:38.941967  344952 api_server.go:72] duration metric: took 14.30774812s to wait for apiserver process to appear ...
	I1123 10:16:38.941992  344952 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:16:38.942007  344952 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:16:38.946871  344952 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 10:16:38.947779  344952 api_server.go:141] control plane version: v1.34.1
	I1123 10:16:38.947803  344952 api_server.go:131] duration metric: took 5.806056ms to wait for apiserver health ...
	I1123 10:16:38.947811  344952 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:16:38.951278  344952 system_pods.go:59] 8 kube-system pods found
	I1123 10:16:38.951306  344952 system_pods.go:61] "coredns-66bc5c9577-krmwt" [39101b53-5254-41f3-bac9-c711e67dc551] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:38.951313  344952 system_pods.go:61] "etcd-no-preload-541522" [80258726-c8e2-4b27-962c-ee45e6948d2c] Running
	I1123 10:16:38.951318  344952 system_pods.go:61] "kindnet-9vppw" [3b98e7a4-34e9-46af-97a1-764b6ed92ec6] Running
	I1123 10:16:38.951322  344952 system_pods.go:61] "kube-apiserver-no-preload-541522" [54bb8554-b2d7-4fc2-9d26-507e36b6d56f] Running
	I1123 10:16:38.951328  344952 system_pods.go:61] "kube-controller-manager-no-preload-541522" [b6d91917-0381-4558-9f2a-769f81cf9d86] Running
	I1123 10:16:38.951333  344952 system_pods.go:61] "kube-proxy-sllct" [c5b13417-4bca-4ec1-8e60-cf5016aa28ca] Running
	I1123 10:16:38.951337  344952 system_pods.go:61] "kube-scheduler-no-preload-541522" [31a3c55f-ac27-4800-af06-822af5bc6836] Running
	I1123 10:16:38.951341  344952 system_pods.go:61] "storage-provisioner" [40eb99ea-9515-431c-888b-81826014f8a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:38.951347  344952 system_pods.go:74] duration metric: took 3.530661ms to wait for pod list to return data ...
	I1123 10:16:38.951356  344952 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:16:38.953395  344952 default_sa.go:45] found service account: "default"
	I1123 10:16:38.953416  344952 default_sa.go:55] duration metric: took 2.05549ms for default service account to be created ...
	I1123 10:16:38.953424  344952 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:16:38.955705  344952 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:38.955729  344952 system_pods.go:89] "coredns-66bc5c9577-krmwt" [39101b53-5254-41f3-bac9-c711e67dc551] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:38.955735  344952 system_pods.go:89] "etcd-no-preload-541522" [80258726-c8e2-4b27-962c-ee45e6948d2c] Running
	I1123 10:16:38.955743  344952 system_pods.go:89] "kindnet-9vppw" [3b98e7a4-34e9-46af-97a1-764b6ed92ec6] Running
	I1123 10:16:38.955749  344952 system_pods.go:89] "kube-apiserver-no-preload-541522" [54bb8554-b2d7-4fc2-9d26-507e36b6d56f] Running
	I1123 10:16:38.955755  344952 system_pods.go:89] "kube-controller-manager-no-preload-541522" [b6d91917-0381-4558-9f2a-769f81cf9d86] Running
	I1123 10:16:38.955766  344952 system_pods.go:89] "kube-proxy-sllct" [c5b13417-4bca-4ec1-8e60-cf5016aa28ca] Running
	I1123 10:16:38.955774  344952 system_pods.go:89] "kube-scheduler-no-preload-541522" [31a3c55f-ac27-4800-af06-822af5bc6836] Running
	I1123 10:16:38.955785  344952 system_pods.go:89] "storage-provisioner" [40eb99ea-9515-431c-888b-81826014f8a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:38.955807  344952 retry.go:31] will retry after 286.541435ms: missing components: kube-dns
	I1123 10:16:39.246793  344952 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:39.246834  344952 system_pods.go:89] "coredns-66bc5c9577-krmwt" [39101b53-5254-41f3-bac9-c711e67dc551] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:39.246842  344952 system_pods.go:89] "etcd-no-preload-541522" [80258726-c8e2-4b27-962c-ee45e6948d2c] Running
	I1123 10:16:39.246850  344952 system_pods.go:89] "kindnet-9vppw" [3b98e7a4-34e9-46af-97a1-764b6ed92ec6] Running
	I1123 10:16:39.246855  344952 system_pods.go:89] "kube-apiserver-no-preload-541522" [54bb8554-b2d7-4fc2-9d26-507e36b6d56f] Running
	I1123 10:16:39.246861  344952 system_pods.go:89] "kube-controller-manager-no-preload-541522" [b6d91917-0381-4558-9f2a-769f81cf9d86] Running
	I1123 10:16:39.246866  344952 system_pods.go:89] "kube-proxy-sllct" [c5b13417-4bca-4ec1-8e60-cf5016aa28ca] Running
	I1123 10:16:39.246876  344952 system_pods.go:89] "kube-scheduler-no-preload-541522" [31a3c55f-ac27-4800-af06-822af5bc6836] Running
	I1123 10:16:39.246889  344952 system_pods.go:89] "storage-provisioner" [40eb99ea-9515-431c-888b-81826014f8a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:39.246907  344952 retry.go:31] will retry after 342.610222ms: missing components: kube-dns
	I1123 10:16:39.594146  344952 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:39.594183  344952 system_pods.go:89] "coredns-66bc5c9577-krmwt" [39101b53-5254-41f3-bac9-c711e67dc551] Running
	I1123 10:16:39.594196  344952 system_pods.go:89] "etcd-no-preload-541522" [80258726-c8e2-4b27-962c-ee45e6948d2c] Running
	I1123 10:16:39.594200  344952 system_pods.go:89] "kindnet-9vppw" [3b98e7a4-34e9-46af-97a1-764b6ed92ec6] Running
	I1123 10:16:39.594204  344952 system_pods.go:89] "kube-apiserver-no-preload-541522" [54bb8554-b2d7-4fc2-9d26-507e36b6d56f] Running
	I1123 10:16:39.594210  344952 system_pods.go:89] "kube-controller-manager-no-preload-541522" [b6d91917-0381-4558-9f2a-769f81cf9d86] Running
	I1123 10:16:39.594215  344952 system_pods.go:89] "kube-proxy-sllct" [c5b13417-4bca-4ec1-8e60-cf5016aa28ca] Running
	I1123 10:16:39.594220  344952 system_pods.go:89] "kube-scheduler-no-preload-541522" [31a3c55f-ac27-4800-af06-822af5bc6836] Running
	I1123 10:16:39.594226  344952 system_pods.go:89] "storage-provisioner" [40eb99ea-9515-431c-888b-81826014f8a6] Running
	I1123 10:16:39.594236  344952 system_pods.go:126] duration metric: took 640.805319ms to wait for k8s-apps to be running ...
	I1123 10:16:39.594250  344952 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:16:39.594310  344952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:16:39.608983  344952 system_svc.go:56] duration metric: took 14.722696ms WaitForService to wait for kubelet
	I1123 10:16:39.609015  344952 kubeadm.go:587] duration metric: took 14.97480089s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:16:39.609037  344952 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:16:39.611842  344952 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:16:39.611865  344952 node_conditions.go:123] node cpu capacity is 8
	I1123 10:16:39.611882  344952 node_conditions.go:105] duration metric: took 2.839945ms to run NodePressure ...
	I1123 10:16:39.611895  344952 start.go:242] waiting for startup goroutines ...
	I1123 10:16:39.611908  344952 start.go:247] waiting for cluster config update ...
	I1123 10:16:39.611919  344952 start.go:256] writing updated cluster config ...
	I1123 10:16:39.612185  344952 ssh_runner.go:195] Run: rm -f paused
	I1123 10:16:39.616031  344952 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:39.619510  344952 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-krmwt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:39.623392  344952 pod_ready.go:94] pod "coredns-66bc5c9577-krmwt" is "Ready"
	I1123 10:16:39.623415  344952 pod_ready.go:86] duration metric: took 3.869312ms for pod "coredns-66bc5c9577-krmwt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:39.625265  344952 pod_ready.go:83] waiting for pod "etcd-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:39.628641  344952 pod_ready.go:94] pod "etcd-no-preload-541522" is "Ready"
	I1123 10:16:39.628659  344952 pod_ready.go:86] duration metric: took 3.374871ms for pod "etcd-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:39.630356  344952 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:39.633564  344952 pod_ready.go:94] pod "kube-apiserver-no-preload-541522" is "Ready"
	I1123 10:16:39.633587  344952 pod_ready.go:86] duration metric: took 3.21019ms for pod "kube-apiserver-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:39.635340  344952 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:40.020259  344952 pod_ready.go:94] pod "kube-controller-manager-no-preload-541522" is "Ready"
	I1123 10:16:40.020290  344952 pod_ready.go:86] duration metric: took 384.929039ms for pod "kube-controller-manager-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:40.220795  344952 pod_ready.go:83] waiting for pod "kube-proxy-sllct" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:40.620970  344952 pod_ready.go:94] pod "kube-proxy-sllct" is "Ready"
	I1123 10:16:40.621002  344952 pod_ready.go:86] duration metric: took 400.183007ms for pod "kube-proxy-sllct" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:40.819960  344952 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:41.219866  344952 pod_ready.go:94] pod "kube-scheduler-no-preload-541522" is "Ready"
	I1123 10:16:41.219893  344952 pod_ready.go:86] duration metric: took 399.908601ms for pod "kube-scheduler-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:41.219905  344952 pod_ready.go:40] duration metric: took 1.603850974s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:41.264158  344952 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:16:41.265945  344952 out.go:179] * Done! kubectl is now configured to use "no-preload-541522" cluster and "default" namespace by default
	I1123 10:16:42.506018  341630 pod_ready.go:94] pod "coredns-66bc5c9577-p6sw2" is "Ready"
	I1123 10:16:42.506054  341630 pod_ready.go:86] duration metric: took 31.004987147s for pod "coredns-66bc5c9577-p6sw2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:42.508459  341630 pod_ready.go:83] waiting for pod "etcd-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:42.514192  341630 pod_ready.go:94] pod "etcd-bridge-791161" is "Ready"
	I1123 10:16:42.514218  341630 pod_ready.go:86] duration metric: took 5.738216ms for pod "etcd-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:42.516115  341630 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:42.519705  341630 pod_ready.go:94] pod "kube-apiserver-bridge-791161" is "Ready"
	I1123 10:16:42.519724  341630 pod_ready.go:86] duration metric: took 3.591711ms for pod "kube-apiserver-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:42.521450  341630 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:42.704830  341630 pod_ready.go:94] pod "kube-controller-manager-bridge-791161" is "Ready"
	I1123 10:16:42.704859  341630 pod_ready.go:86] duration metric: took 183.390224ms for pod "kube-controller-manager-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:42.905328  341630 pod_ready.go:83] waiting for pod "kube-proxy-sn6s2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:43.304355  341630 pod_ready.go:94] pod "kube-proxy-sn6s2" is "Ready"
	I1123 10:16:43.304382  341630 pod_ready.go:86] duration metric: took 399.024239ms for pod "kube-proxy-sn6s2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:43.504607  341630 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:43.905001  341630 pod_ready.go:94] pod "kube-scheduler-bridge-791161" is "Ready"
	I1123 10:16:43.905030  341630 pod_ready.go:86] duration metric: took 400.39674ms for pod "kube-scheduler-bridge-791161" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:43.905043  341630 pod_ready.go:40] duration metric: took 32.407876329s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:43.960235  341630 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:16:43.961459  341630 out.go:179] * Done! kubectl is now configured to use "bridge-791161" cluster and "default" namespace by default
	W1123 10:16:40.065837  356138 node_ready.go:57] node "embed-certs-412306" has "Ready":"False" status (will retry)
	W1123 10:16:42.565358  356138 node_ready.go:57] node "embed-certs-412306" has "Ready":"False" status (will retry)
	W1123 10:16:45.068207  356138 node_ready.go:57] node "embed-certs-412306" has "Ready":"False" status (will retry)
	I1123 10:16:46.568628  356138 node_ready.go:49] node "embed-certs-412306" is "Ready"
	I1123 10:16:46.568656  356138 node_ready.go:38] duration metric: took 11.006153698s for node "embed-certs-412306" to be "Ready" ...
	I1123 10:16:46.568672  356138 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:16:46.568716  356138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:16:46.582933  356138 api_server.go:72] duration metric: took 11.296710961s to wait for apiserver process to appear ...
	I1123 10:16:46.582964  356138 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:16:46.582989  356138 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 10:16:46.588509  356138 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 10:16:46.589515  356138 api_server.go:141] control plane version: v1.34.1
	I1123 10:16:46.589535  356138 api_server.go:131] duration metric: took 6.56399ms to wait for apiserver health ...
	I1123 10:16:46.589544  356138 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:16:46.592533  356138 system_pods.go:59] 8 kube-system pods found
	I1123 10:16:46.592562  356138 system_pods.go:61] "coredns-66bc5c9577-fxl7j" [4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:46.592569  356138 system_pods.go:61] "etcd-embed-certs-412306" [f8befdc6-c172-4569-9ca7-2d3ba827dbb5] Running
	I1123 10:16:46.592578  356138 system_pods.go:61] "kindnet-sm2h2" [1af4c3f2-8377-4a64-9499-502b9841a81d] Running
	I1123 10:16:46.592587  356138 system_pods.go:61] "kube-apiserver-embed-certs-412306" [0c456387-52ea-4271-af83-9b87f7ddc832] Running
	I1123 10:16:46.592593  356138 system_pods.go:61] "kube-controller-manager-embed-certs-412306" [cebfc94c-5d85-40f3-8099-b50676f43ef5] Running
	I1123 10:16:46.592602  356138 system_pods.go:61] "kube-proxy-2vnjq" [10c4fa48-37ca-4164-83ef-7ab034f844a9] Running
	I1123 10:16:46.592607  356138 system_pods.go:61] "kube-scheduler-embed-certs-412306" [9384ec5c-f592-4f4d-84ba-313b7eabf50c] Running
	I1123 10:16:46.592620  356138 system_pods.go:61] "storage-provisioner" [199ec01f-2a64-4666-af02-cd1ad7ae4cc2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:46.592631  356138 system_pods.go:74] duration metric: took 3.080482ms to wait for pod list to return data ...
	I1123 10:16:46.592641  356138 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:16:46.595192  356138 default_sa.go:45] found service account: "default"
	I1123 10:16:46.595213  356138 default_sa.go:55] duration metric: took 2.563019ms for default service account to be created ...
	I1123 10:16:46.595223  356138 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:16:46.597828  356138 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:46.597856  356138 system_pods.go:89] "coredns-66bc5c9577-fxl7j" [4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:46.597863  356138 system_pods.go:89] "etcd-embed-certs-412306" [f8befdc6-c172-4569-9ca7-2d3ba827dbb5] Running
	I1123 10:16:46.597870  356138 system_pods.go:89] "kindnet-sm2h2" [1af4c3f2-8377-4a64-9499-502b9841a81d] Running
	I1123 10:16:46.597876  356138 system_pods.go:89] "kube-apiserver-embed-certs-412306" [0c456387-52ea-4271-af83-9b87f7ddc832] Running
	I1123 10:16:46.597887  356138 system_pods.go:89] "kube-controller-manager-embed-certs-412306" [cebfc94c-5d85-40f3-8099-b50676f43ef5] Running
	I1123 10:16:46.597892  356138 system_pods.go:89] "kube-proxy-2vnjq" [10c4fa48-37ca-4164-83ef-7ab034f844a9] Running
	I1123 10:16:46.597898  356138 system_pods.go:89] "kube-scheduler-embed-certs-412306" [9384ec5c-f592-4f4d-84ba-313b7eabf50c] Running
	I1123 10:16:46.597905  356138 system_pods.go:89] "storage-provisioner" [199ec01f-2a64-4666-af02-cd1ad7ae4cc2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:46.597942  356138 retry.go:31] will retry after 236.958803ms: missing components: kube-dns
	I1123 10:16:46.840195  356138 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:46.840241  356138 system_pods.go:89] "coredns-66bc5c9577-fxl7j" [4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:46.840254  356138 system_pods.go:89] "etcd-embed-certs-412306" [f8befdc6-c172-4569-9ca7-2d3ba827dbb5] Running
	I1123 10:16:46.840283  356138 system_pods.go:89] "kindnet-sm2h2" [1af4c3f2-8377-4a64-9499-502b9841a81d] Running
	I1123 10:16:46.840293  356138 system_pods.go:89] "kube-apiserver-embed-certs-412306" [0c456387-52ea-4271-af83-9b87f7ddc832] Running
	I1123 10:16:46.840304  356138 system_pods.go:89] "kube-controller-manager-embed-certs-412306" [cebfc94c-5d85-40f3-8099-b50676f43ef5] Running
	I1123 10:16:46.840309  356138 system_pods.go:89] "kube-proxy-2vnjq" [10c4fa48-37ca-4164-83ef-7ab034f844a9] Running
	I1123 10:16:46.840317  356138 system_pods.go:89] "kube-scheduler-embed-certs-412306" [9384ec5c-f592-4f4d-84ba-313b7eabf50c] Running
	I1123 10:16:46.840326  356138 system_pods.go:89] "storage-provisioner" [199ec01f-2a64-4666-af02-cd1ad7ae4cc2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:46.840352  356138 retry.go:31] will retry after 288.634662ms: missing components: kube-dns
	I1123 10:16:47.133783  356138 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:47.133825  356138 system_pods.go:89] "coredns-66bc5c9577-fxl7j" [4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:47.133834  356138 system_pods.go:89] "etcd-embed-certs-412306" [f8befdc6-c172-4569-9ca7-2d3ba827dbb5] Running
	I1123 10:16:47.133844  356138 system_pods.go:89] "kindnet-sm2h2" [1af4c3f2-8377-4a64-9499-502b9841a81d] Running
	I1123 10:16:47.133850  356138 system_pods.go:89] "kube-apiserver-embed-certs-412306" [0c456387-52ea-4271-af83-9b87f7ddc832] Running
	I1123 10:16:47.133855  356138 system_pods.go:89] "kube-controller-manager-embed-certs-412306" [cebfc94c-5d85-40f3-8099-b50676f43ef5] Running
	I1123 10:16:47.133861  356138 system_pods.go:89] "kube-proxy-2vnjq" [10c4fa48-37ca-4164-83ef-7ab034f844a9] Running
	I1123 10:16:47.133866  356138 system_pods.go:89] "kube-scheduler-embed-certs-412306" [9384ec5c-f592-4f4d-84ba-313b7eabf50c] Running
	I1123 10:16:47.133874  356138 system_pods.go:89] "storage-provisioner" [199ec01f-2a64-4666-af02-cd1ad7ae4cc2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:47.133895  356138 retry.go:31] will retry after 329.106738ms: missing components: kube-dns
	I1123 10:16:47.467403  356138 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:47.467456  356138 system_pods.go:89] "coredns-66bc5c9577-fxl7j" [4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:16:47.467465  356138 system_pods.go:89] "etcd-embed-certs-412306" [f8befdc6-c172-4569-9ca7-2d3ba827dbb5] Running
	I1123 10:16:47.467474  356138 system_pods.go:89] "kindnet-sm2h2" [1af4c3f2-8377-4a64-9499-502b9841a81d] Running
	I1123 10:16:47.467480  356138 system_pods.go:89] "kube-apiserver-embed-certs-412306" [0c456387-52ea-4271-af83-9b87f7ddc832] Running
	I1123 10:16:47.467486  356138 system_pods.go:89] "kube-controller-manager-embed-certs-412306" [cebfc94c-5d85-40f3-8099-b50676f43ef5] Running
	I1123 10:16:47.467498  356138 system_pods.go:89] "kube-proxy-2vnjq" [10c4fa48-37ca-4164-83ef-7ab034f844a9] Running
	I1123 10:16:47.467504  356138 system_pods.go:89] "kube-scheduler-embed-certs-412306" [9384ec5c-f592-4f4d-84ba-313b7eabf50c] Running
	I1123 10:16:47.467516  356138 system_pods.go:89] "storage-provisioner" [199ec01f-2a64-4666-af02-cd1ad7ae4cc2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:16:47.467545  356138 retry.go:31] will retry after 556.171915ms: missing components: kube-dns
	I1123 10:16:48.028184  356138 system_pods.go:86] 8 kube-system pods found
	I1123 10:16:48.028230  356138 system_pods.go:89] "coredns-66bc5c9577-fxl7j" [4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b] Running
	I1123 10:16:48.028239  356138 system_pods.go:89] "etcd-embed-certs-412306" [f8befdc6-c172-4569-9ca7-2d3ba827dbb5] Running
	I1123 10:16:48.028244  356138 system_pods.go:89] "kindnet-sm2h2" [1af4c3f2-8377-4a64-9499-502b9841a81d] Running
	I1123 10:16:48.028248  356138 system_pods.go:89] "kube-apiserver-embed-certs-412306" [0c456387-52ea-4271-af83-9b87f7ddc832] Running
	I1123 10:16:48.028252  356138 system_pods.go:89] "kube-controller-manager-embed-certs-412306" [cebfc94c-5d85-40f3-8099-b50676f43ef5] Running
	I1123 10:16:48.028255  356138 system_pods.go:89] "kube-proxy-2vnjq" [10c4fa48-37ca-4164-83ef-7ab034f844a9] Running
	I1123 10:16:48.028259  356138 system_pods.go:89] "kube-scheduler-embed-certs-412306" [9384ec5c-f592-4f4d-84ba-313b7eabf50c] Running
	I1123 10:16:48.028262  356138 system_pods.go:89] "storage-provisioner" [199ec01f-2a64-4666-af02-cd1ad7ae4cc2] Running
	I1123 10:16:48.028270  356138 system_pods.go:126] duration metric: took 1.433040723s to wait for k8s-apps to be running ...
	I1123 10:16:48.028279  356138 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:16:48.028322  356138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:16:48.041305  356138 system_svc.go:56] duration metric: took 13.015993ms WaitForService to wait for kubelet
	I1123 10:16:48.041336  356138 kubeadm.go:587] duration metric: took 12.755118682s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:16:48.041361  356138 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:16:48.044390  356138 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:16:48.044420  356138 node_conditions.go:123] node cpu capacity is 8
	I1123 10:16:48.044439  356138 node_conditions.go:105] duration metric: took 3.072771ms to run NodePressure ...
	I1123 10:16:48.044457  356138 start.go:242] waiting for startup goroutines ...
	I1123 10:16:48.044471  356138 start.go:247] waiting for cluster config update ...
	I1123 10:16:48.044488  356138 start.go:256] writing updated cluster config ...
	I1123 10:16:48.044772  356138 ssh_runner.go:195] Run: rm -f paused
	I1123 10:16:48.048532  356138 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:48.051926  356138 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fxl7j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:48.056287  356138 pod_ready.go:94] pod "coredns-66bc5c9577-fxl7j" is "Ready"
	I1123 10:16:48.056323  356138 pod_ready.go:86] duration metric: took 4.377095ms for pod "coredns-66bc5c9577-fxl7j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:48.058178  356138 pod_ready.go:83] waiting for pod "etcd-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:48.061689  356138 pod_ready.go:94] pod "etcd-embed-certs-412306" is "Ready"
	I1123 10:16:48.061711  356138 pod_ready.go:86] duration metric: took 3.514207ms for pod "etcd-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:48.063466  356138 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:48.067063  356138 pod_ready.go:94] pod "kube-apiserver-embed-certs-412306" is "Ready"
	I1123 10:16:48.067080  356138 pod_ready.go:86] duration metric: took 3.595858ms for pod "kube-apiserver-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:48.069048  356138 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:48.452780  356138 pod_ready.go:94] pod "kube-controller-manager-embed-certs-412306" is "Ready"
	I1123 10:16:48.452805  356138 pod_ready.go:86] duration metric: took 383.73999ms for pod "kube-controller-manager-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:48.653743  356138 pod_ready.go:83] waiting for pod "kube-proxy-2vnjq" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:49.052970  356138 pod_ready.go:94] pod "kube-proxy-2vnjq" is "Ready"
	I1123 10:16:49.052998  356138 pod_ready.go:86] duration metric: took 399.22677ms for pod "kube-proxy-2vnjq" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:49.253502  356138 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:49.652551  356138 pod_ready.go:94] pod "kube-scheduler-embed-certs-412306" is "Ready"
	I1123 10:16:49.652578  356138 pod_ready.go:86] duration metric: took 399.044168ms for pod "kube-scheduler-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:16:49.652589  356138 pod_ready.go:40] duration metric: took 1.604029447s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:16:49.695575  356138 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:16:49.697240  356138 out.go:179] * Done! kubectl is now configured to use "embed-certs-412306" cluster and "default" namespace by default
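The pod_ready lines above wait for each control-plane pod's Ready condition before the profile is declared done. A small sketch of an equivalent check, shelling out to kubectl with a JSONPath filter, follows; the helper name, kubeconfig path, and pod name are examples taken from this log, and the fixed 2-second loop interval is arbitrary rather than minikube's retry backoff.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady asks kubectl whether the pod's Ready condition is "True".
func podReady(kubeconfig, namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
		"-n", namespace, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	for i := 0; i < 30; i++ {
		ready, err := podReady("/var/lib/minikube/kubeconfig", "kube-system", "coredns-66bc5c9577-fxl7j")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("pod did not become Ready in time")
}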
	
	
	==> CRI-O <==
	Nov 23 10:16:46 embed-certs-412306 crio[777]: time="2025-11-23T10:16:46.726344425Z" level=info msg="Starting container: ac4d8a97642e98b353931547de6f2c1a52df6040140380cdfdec0b64db980973" id=a0ebbae2-65b1-40e7-bd62-b254ee36e08f name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:16:46 embed-certs-412306 crio[777]: time="2025-11-23T10:16:46.728439074Z" level=info msg="Started container" PID=1834 containerID=ac4d8a97642e98b353931547de6f2c1a52df6040140380cdfdec0b64db980973 description=kube-system/coredns-66bc5c9577-fxl7j/coredns id=a0ebbae2-65b1-40e7-bd62-b254ee36e08f name=/runtime.v1.RuntimeService/StartContainer sandboxID=c8c410e85ab989358016395e2eec229c3bd52f10b569da326c14c69820e40c7d
	Nov 23 10:16:50 embed-certs-412306 crio[777]: time="2025-11-23T10:16:50.157716091Z" level=info msg="Running pod sandbox: default/busybox/POD" id=a82223dd-4deb-4bb6-a59c-d4f8a2523e92 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:16:50 embed-certs-412306 crio[777]: time="2025-11-23T10:16:50.15778833Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:16:50 embed-certs-412306 crio[777]: time="2025-11-23T10:16:50.162237343Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6e495016499577ac6bd185cc9a497cb0862ee9907d85ca8b54714bdb2bce0d49 UID:5b9d8e12-8c4d-4b2d-b287-4cae17b49f6e NetNS:/var/run/netns/b616c904-50a3-48d4-a398-1476c0e30a90 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000452560}] Aliases:map[]}"
	Nov 23 10:16:50 embed-certs-412306 crio[777]: time="2025-11-23T10:16:50.162262313Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 10:16:50 embed-certs-412306 crio[777]: time="2025-11-23T10:16:50.171485058Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6e495016499577ac6bd185cc9a497cb0862ee9907d85ca8b54714bdb2bce0d49 UID:5b9d8e12-8c4d-4b2d-b287-4cae17b49f6e NetNS:/var/run/netns/b616c904-50a3-48d4-a398-1476c0e30a90 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000452560}] Aliases:map[]}"
	Nov 23 10:16:50 embed-certs-412306 crio[777]: time="2025-11-23T10:16:50.171595036Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 10:16:50 embed-certs-412306 crio[777]: time="2025-11-23T10:16:50.17228947Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 10:16:50 embed-certs-412306 crio[777]: time="2025-11-23T10:16:50.173005789Z" level=info msg="Ran pod sandbox 6e495016499577ac6bd185cc9a497cb0862ee9907d85ca8b54714bdb2bce0d49 with infra container: default/busybox/POD" id=a82223dd-4deb-4bb6-a59c-d4f8a2523e92 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:16:50 embed-certs-412306 crio[777]: time="2025-11-23T10:16:50.174201301Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bcce6148-353b-4ea8-b6f4-b7b1d102d5c4 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:16:50 embed-certs-412306 crio[777]: time="2025-11-23T10:16:50.174323198Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=bcce6148-353b-4ea8-b6f4-b7b1d102d5c4 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:16:50 embed-certs-412306 crio[777]: time="2025-11-23T10:16:50.174368952Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=bcce6148-353b-4ea8-b6f4-b7b1d102d5c4 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:16:50 embed-certs-412306 crio[777]: time="2025-11-23T10:16:50.175247363Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=97164504-174d-4825-908b-4a4699a8c60a name=/runtime.v1.ImageService/PullImage
	Nov 23 10:16:50 embed-certs-412306 crio[777]: time="2025-11-23T10:16:50.176907548Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 10:16:53 embed-certs-412306 crio[777]: time="2025-11-23T10:16:53.123171301Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=97164504-174d-4825-908b-4a4699a8c60a name=/runtime.v1.ImageService/PullImage
	Nov 23 10:16:53 embed-certs-412306 crio[777]: time="2025-11-23T10:16:53.123896413Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c6d17960-ca11-40df-b10c-ce39f7fb6abd name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:16:53 embed-certs-412306 crio[777]: time="2025-11-23T10:16:53.125332036Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9b63e6c9-891c-48e4-8045-097e09e3c5c5 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:16:53 embed-certs-412306 crio[777]: time="2025-11-23T10:16:53.130163418Z" level=info msg="Creating container: default/busybox/busybox" id=ece6145e-0681-48ad-bc64-992b6cde024c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:16:53 embed-certs-412306 crio[777]: time="2025-11-23T10:16:53.13029598Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:16:53 embed-certs-412306 crio[777]: time="2025-11-23T10:16:53.134281477Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:16:53 embed-certs-412306 crio[777]: time="2025-11-23T10:16:53.134656816Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:16:53 embed-certs-412306 crio[777]: time="2025-11-23T10:16:53.158452437Z" level=info msg="Created container a9b6be60ac47724565349859d8709b6ea54fd90d773232a78ccc3af6100b39b9: default/busybox/busybox" id=ece6145e-0681-48ad-bc64-992b6cde024c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:16:53 embed-certs-412306 crio[777]: time="2025-11-23T10:16:53.159027527Z" level=info msg="Starting container: a9b6be60ac47724565349859d8709b6ea54fd90d773232a78ccc3af6100b39b9" id=c78d3efa-6d31-4381-9872-a1ed00404a21 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:16:53 embed-certs-412306 crio[777]: time="2025-11-23T10:16:53.160613315Z" level=info msg="Started container" PID=1910 containerID=a9b6be60ac47724565349859d8709b6ea54fd90d773232a78ccc3af6100b39b9 description=default/busybox/busybox id=c78d3efa-6d31-4381-9872-a1ed00404a21 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6e495016499577ac6bd185cc9a497cb0862ee9907d85ca8b54714bdb2bce0d49
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	a9b6be60ac477       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   6e49501649957       busybox                                      default
	ac4d8a97642e9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   c8c410e85ab98       coredns-66bc5c9577-fxl7j                     kube-system
	2f9f73ac34cb4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   74a7dfcb90890       storage-provisioner                          kube-system
	9332aee599436       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   773fd8d990e14       kube-proxy-2vnjq                             kube-system
	4e11fc755aab3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      24 seconds ago      Running             kindnet-cni               0                   67515c793f2b1       kindnet-sm2h2                                kube-system
	1bd0f91dc3758       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   4b13dafbab1cb       etcd-embed-certs-412306                      kube-system
	12ea3acd34975       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   ab81d29d4aa2a       kube-apiserver-embed-certs-412306            kube-system
	8862063ffb4ba       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   39076787b9437       kube-scheduler-embed-certs-412306            kube-system
	6ec64d7646d01       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   96ba72d7f59b4       kube-controller-manager-embed-certs-412306   kube-system
	
	
	==> coredns [ac4d8a97642e98b353931547de6f2c1a52df6040140380cdfdec0b64db980973] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38500 - 38578 "HINFO IN 2094638484373507245.4448835956279391776. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033661488s
	
	
	==> describe nodes <==
	Name:               embed-certs-412306
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-412306
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=embed-certs-412306
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_16_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:16:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-412306
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:17:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:17:00 +0000   Sun, 23 Nov 2025 10:16:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:17:00 +0000   Sun, 23 Nov 2025 10:16:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:17:00 +0000   Sun, 23 Nov 2025 10:16:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:17:00 +0000   Sun, 23 Nov 2025 10:16:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-412306
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                f548ff8d-94a1-438a-a9c0-5f1765fa56bb
	  Boot ID:                    37682299-5e60-467e-85b2-43c912a4056e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-fxl7j                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-412306                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-sm2h2                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-embed-certs-412306             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-embed-certs-412306    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-2vnjq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-embed-certs-412306             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node embed-certs-412306 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node embed-certs-412306 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node embed-certs-412306 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node embed-certs-412306 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node embed-certs-412306 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node embed-certs-412306 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node embed-certs-412306 event: Registered Node embed-certs-412306 in Controller
	  Normal  NodeReady                14s                kubelet          Node embed-certs-412306 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[ +16.383752] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 09:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 10:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[Nov23 10:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 16 63 a6 3b 7c 08 06
	[  +0.000421] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e f8 56 88 48 d7 08 06
	[  +0.082350] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff be 6d 17 98 af e9 08 06
	[  +0.000334] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[ +24.687881] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 3c b3 56 e6 32 08 06
	[  +0.000364] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da b2 25 9e f0 5d 08 06
	[Nov23 10:16] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	[ +42.472302] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 bc be 6d 36 b3 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	
	
	==> etcd [1bd0f91dc3758dc131f67a0859311579bcb373347e7d45243ea7d109aabf2931] <==
	{"level":"warn","ts":"2025-11-23T10:16:26.220081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.229318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.235474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.242201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.249299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.256266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.262974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.269364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.276068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.292289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.298878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.306002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.312970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.320858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.327824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.335746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.342823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.349846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.357424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.363700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.369875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.383051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.390688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.398820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:16:26.454250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53338","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:17:00 up  2:59,  0 user,  load average: 6.43, 5.18, 2.87
	Linux embed-certs-412306 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4e11fc755aab35c5b9af8f9c225f0b61708397320b32fdef93d34bdfed719f04] <==
	I1123 10:16:35.881754       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:16:35.882002       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1123 10:16:35.882166       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:16:35.882188       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:16:35.882211       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:16:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:16:36.176300       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:16:36.176351       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:16:36.176366       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:16:36.176522       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:16:36.477032       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:16:36.477059       1 metrics.go:72] Registering metrics
	I1123 10:16:36.477163       1 controller.go:711] "Syncing nftables rules"
	I1123 10:16:46.176355       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 10:16:46.176427       1 main.go:301] handling current node
	I1123 10:16:56.178325       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 10:16:56.178354       1 main.go:301] handling current node
	
	
	==> kube-apiserver [12ea3acd34975b12326835890db6e4c363778dd3b7f6dadb6d0b87ae524003a1] <==
	I1123 10:16:27.132755       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	E1123 10:16:27.135972       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1123 10:16:27.138572       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 10:16:27.144594       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 10:16:27.157266       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:16:27.179995       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:16:27.321136       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:16:27.984787       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 10:16:27.988662       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 10:16:27.988683       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:16:28.422317       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:16:28.455826       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:16:28.603708       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 10:16:28.610472       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1123 10:16:28.611372       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:16:28.615207       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:16:29.067897       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:16:29.737191       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:16:29.744903       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 10:16:29.751535       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 10:16:34.721034       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 10:16:34.919661       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:16:35.021502       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:16:35.030253       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1123 10:16:58.934381       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:35678: use of closed network connection
	
	
	==> kube-controller-manager [6ec64d7646d01b1abfbbd4124d9fa2ed243f52f03b5d21559268a0a3a70133cf] <==
	I1123 10:16:34.067161       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 10:16:34.067183       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 10:16:34.067252       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 10:16:34.067259       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-412306"
	I1123 10:16:34.067184       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 10:16:34.067341       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 10:16:34.067376       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 10:16:34.067531       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 10:16:34.067669       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 10:16:34.067686       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 10:16:34.067716       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 10:16:34.067786       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 10:16:34.068510       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 10:16:34.070665       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 10:16:34.071583       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 10:16:34.071608       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 10:16:34.071656       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 10:16:34.071716       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 10:16:34.071726       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 10:16:34.071734       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 10:16:34.071725       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:16:34.077196       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-412306" podCIDRs=["10.244.0.0/24"]
	I1123 10:16:34.078119       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 10:16:34.086635       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:16:49.069354       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9332aee5994363ad3141b8d78cc349cc14838fbc26f83fef92a6c86fa2890310] <==
	I1123 10:16:35.744736       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:16:35.818146       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:16:35.919100       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:16:35.919135       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1123 10:16:35.919225       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:16:35.938282       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:16:35.938337       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:16:35.943499       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:16:35.943842       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:16:35.943857       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:16:35.944961       1 config.go:200] "Starting service config controller"
	I1123 10:16:35.944988       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:16:35.944996       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:16:35.945020       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:16:35.945023       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:16:35.945038       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:16:35.945062       1 config.go:309] "Starting node config controller"
	I1123 10:16:35.945067       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:16:35.945073       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:16:36.046073       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:16:36.046177       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 10:16:36.046247       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8862063ffb4ba77ee496736f032f475a9ff6877cb1fa961e8a27b6267a0dff08] <==
	E1123 10:16:27.078076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 10:16:27.078156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 10:16:27.078275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 10:16:27.078375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 10:16:27.078495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 10:16:27.079599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 10:16:27.080163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 10:16:27.080262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 10:16:27.080653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 10:16:27.080700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 10:16:27.080888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 10:16:27.080915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 10:16:27.081630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 10:16:27.081792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 10:16:27.893660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 10:16:27.954070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 10:16:27.987504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 10:16:28.105263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 10:16:28.105323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 10:16:28.146793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 10:16:28.197438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 10:16:28.219543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 10:16:28.258840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 10:16:28.270255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1123 10:16:28.575141       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:16:34 embed-certs-412306 kubelet[1302]: I1123 10:16:34.786254    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10c4fa48-37ca-4164-83ef-7ab034f844a9-lib-modules\") pod \"kube-proxy-2vnjq\" (UID: \"10c4fa48-37ca-4164-83ef-7ab034f844a9\") " pod="kube-system/kube-proxy-2vnjq"
	Nov 23 10:16:34 embed-certs-412306 kubelet[1302]: I1123 10:16:34.786314    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1af4c3f2-8377-4a64-9499-502b9841a81d-cni-cfg\") pod \"kindnet-sm2h2\" (UID: \"1af4c3f2-8377-4a64-9499-502b9841a81d\") " pod="kube-system/kindnet-sm2h2"
	Nov 23 10:16:34 embed-certs-412306 kubelet[1302]: I1123 10:16:34.786342    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/10c4fa48-37ca-4164-83ef-7ab034f844a9-kube-proxy\") pod \"kube-proxy-2vnjq\" (UID: \"10c4fa48-37ca-4164-83ef-7ab034f844a9\") " pod="kube-system/kube-proxy-2vnjq"
	Nov 23 10:16:34 embed-certs-412306 kubelet[1302]: I1123 10:16:34.786371    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10c4fa48-37ca-4164-83ef-7ab034f844a9-xtables-lock\") pod \"kube-proxy-2vnjq\" (UID: \"10c4fa48-37ca-4164-83ef-7ab034f844a9\") " pod="kube-system/kube-proxy-2vnjq"
	Nov 23 10:16:34 embed-certs-412306 kubelet[1302]: I1123 10:16:34.786392    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1af4c3f2-8377-4a64-9499-502b9841a81d-lib-modules\") pod \"kindnet-sm2h2\" (UID: \"1af4c3f2-8377-4a64-9499-502b9841a81d\") " pod="kube-system/kindnet-sm2h2"
	Nov 23 10:16:34 embed-certs-412306 kubelet[1302]: I1123 10:16:34.786415    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c94lt\" (UniqueName: \"kubernetes.io/projected/10c4fa48-37ca-4164-83ef-7ab034f844a9-kube-api-access-c94lt\") pod \"kube-proxy-2vnjq\" (UID: \"10c4fa48-37ca-4164-83ef-7ab034f844a9\") " pod="kube-system/kube-proxy-2vnjq"
	Nov 23 10:16:34 embed-certs-412306 kubelet[1302]: I1123 10:16:34.786436    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1af4c3f2-8377-4a64-9499-502b9841a81d-xtables-lock\") pod \"kindnet-sm2h2\" (UID: \"1af4c3f2-8377-4a64-9499-502b9841a81d\") " pod="kube-system/kindnet-sm2h2"
	Nov 23 10:16:34 embed-certs-412306 kubelet[1302]: I1123 10:16:34.786493    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc2m8\" (UniqueName: \"kubernetes.io/projected/1af4c3f2-8377-4a64-9499-502b9841a81d-kube-api-access-zc2m8\") pod \"kindnet-sm2h2\" (UID: \"1af4c3f2-8377-4a64-9499-502b9841a81d\") " pod="kube-system/kindnet-sm2h2"
	Nov 23 10:16:34 embed-certs-412306 kubelet[1302]: E1123 10:16:34.892760    1302 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 23 10:16:34 embed-certs-412306 kubelet[1302]: E1123 10:16:34.892782    1302 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 23 10:16:34 embed-certs-412306 kubelet[1302]: E1123 10:16:34.892817    1302 projected.go:196] Error preparing data for projected volume kube-api-access-zc2m8 for pod kube-system/kindnet-sm2h2: configmap "kube-root-ca.crt" not found
	Nov 23 10:16:34 embed-certs-412306 kubelet[1302]: E1123 10:16:34.892794    1302 projected.go:196] Error preparing data for projected volume kube-api-access-c94lt for pod kube-system/kube-proxy-2vnjq: configmap "kube-root-ca.crt" not found
	Nov 23 10:16:34 embed-certs-412306 kubelet[1302]: E1123 10:16:34.892897    1302 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1af4c3f2-8377-4a64-9499-502b9841a81d-kube-api-access-zc2m8 podName:1af4c3f2-8377-4a64-9499-502b9841a81d nodeName:}" failed. No retries permitted until 2025-11-23 10:16:35.392867527 +0000 UTC m=+5.907731527 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zc2m8" (UniqueName: "kubernetes.io/projected/1af4c3f2-8377-4a64-9499-502b9841a81d-kube-api-access-zc2m8") pod "kindnet-sm2h2" (UID: "1af4c3f2-8377-4a64-9499-502b9841a81d") : configmap "kube-root-ca.crt" not found
	Nov 23 10:16:34 embed-certs-412306 kubelet[1302]: E1123 10:16:34.892919    1302 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10c4fa48-37ca-4164-83ef-7ab034f844a9-kube-api-access-c94lt podName:10c4fa48-37ca-4164-83ef-7ab034f844a9 nodeName:}" failed. No retries permitted until 2025-11-23 10:16:35.392904121 +0000 UTC m=+5.907768111 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c94lt" (UniqueName: "kubernetes.io/projected/10c4fa48-37ca-4164-83ef-7ab034f844a9-kube-api-access-c94lt") pod "kube-proxy-2vnjq" (UID: "10c4fa48-37ca-4164-83ef-7ab034f844a9") : configmap "kube-root-ca.crt" not found
	Nov 23 10:16:36 embed-certs-412306 kubelet[1302]: I1123 10:16:36.610717    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2vnjq" podStartSLOduration=2.610691191 podStartE2EDuration="2.610691191s" podCreationTimestamp="2025-11-23 10:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:16:36.610515738 +0000 UTC m=+7.125379748" watchObservedRunningTime="2025-11-23 10:16:36.610691191 +0000 UTC m=+7.125555202"
	Nov 23 10:16:36 embed-certs-412306 kubelet[1302]: I1123 10:16:36.635273    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-sm2h2" podStartSLOduration=2.635251003 podStartE2EDuration="2.635251003s" podCreationTimestamp="2025-11-23 10:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:16:36.635161143 +0000 UTC m=+7.150025150" watchObservedRunningTime="2025-11-23 10:16:36.635251003 +0000 UTC m=+7.150115015"
	Nov 23 10:16:46 embed-certs-412306 kubelet[1302]: I1123 10:16:46.340211    1302 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 10:16:46 embed-certs-412306 kubelet[1302]: I1123 10:16:46.470217    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/199ec01f-2a64-4666-af02-cd1ad7ae4cc2-tmp\") pod \"storage-provisioner\" (UID: \"199ec01f-2a64-4666-af02-cd1ad7ae4cc2\") " pod="kube-system/storage-provisioner"
	Nov 23 10:16:46 embed-certs-412306 kubelet[1302]: I1123 10:16:46.470288    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxbrc\" (UniqueName: \"kubernetes.io/projected/199ec01f-2a64-4666-af02-cd1ad7ae4cc2-kube-api-access-wxbrc\") pod \"storage-provisioner\" (UID: \"199ec01f-2a64-4666-af02-cd1ad7ae4cc2\") " pod="kube-system/storage-provisioner"
	Nov 23 10:16:46 embed-certs-412306 kubelet[1302]: I1123 10:16:46.470318    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b-config-volume\") pod \"coredns-66bc5c9577-fxl7j\" (UID: \"4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b\") " pod="kube-system/coredns-66bc5c9577-fxl7j"
	Nov 23 10:16:46 embed-certs-412306 kubelet[1302]: I1123 10:16:46.470349    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwgqw\" (UniqueName: \"kubernetes.io/projected/4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b-kube-api-access-lwgqw\") pod \"coredns-66bc5c9577-fxl7j\" (UID: \"4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b\") " pod="kube-system/coredns-66bc5c9577-fxl7j"
	Nov 23 10:16:47 embed-certs-412306 kubelet[1302]: I1123 10:16:47.645655    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.645633324 podStartE2EDuration="12.645633324s" podCreationTimestamp="2025-11-23 10:16:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:16:47.636019616 +0000 UTC m=+18.150883626" watchObservedRunningTime="2025-11-23 10:16:47.645633324 +0000 UTC m=+18.160497350"
	Nov 23 10:16:49 embed-certs-412306 kubelet[1302]: I1123 10:16:49.852732    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fxl7j" podStartSLOduration=14.852709214 podStartE2EDuration="14.852709214s" podCreationTimestamp="2025-11-23 10:16:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:16:47.645985766 +0000 UTC m=+18.160849775" watchObservedRunningTime="2025-11-23 10:16:49.852709214 +0000 UTC m=+20.367573222"
	Nov 23 10:16:49 embed-certs-412306 kubelet[1302]: I1123 10:16:49.892363    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2sc9\" (UniqueName: \"kubernetes.io/projected/5b9d8e12-8c4d-4b2d-b287-4cae17b49f6e-kube-api-access-n2sc9\") pod \"busybox\" (UID: \"5b9d8e12-8c4d-4b2d-b287-4cae17b49f6e\") " pod="default/busybox"
	Nov 23 10:16:53 embed-certs-412306 kubelet[1302]: I1123 10:16:53.654943    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.7048645470000001 podStartE2EDuration="4.654922339s" podCreationTimestamp="2025-11-23 10:16:49 +0000 UTC" firstStartedPulling="2025-11-23 10:16:50.1746946 +0000 UTC m=+20.689558594" lastFinishedPulling="2025-11-23 10:16:53.124752395 +0000 UTC m=+23.639616386" observedRunningTime="2025-11-23 10:16:53.654746508 +0000 UTC m=+24.169610518" watchObservedRunningTime="2025-11-23 10:16:53.654922339 +0000 UTC m=+24.169786348"
	
	
	==> storage-provisioner [2f9f73ac34cb4faead4f0cd8ee5ce0125c0885e813d4d1c2d42d3cb892a7f948] <==
	I1123 10:16:46.734332       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:16:46.743798       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:16:46.743843       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:16:46.746250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:46.751306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:16:46.751550       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:16:46.751725       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-412306_98355a62-24ec-400e-b6bf-2a33e24c9e85!
	I1123 10:16:46.751751       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6461001a-51cb-46e2-995d-2cc675b065ba", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-412306_98355a62-24ec-400e-b6bf-2a33e24c9e85 became leader
	W1123 10:16:46.754762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:46.757602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:16:46.852697       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-412306_98355a62-24ec-400e-b6bf-2a33e24c9e85!
	W1123 10:16:48.761546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:48.767751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:50.770647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:50.774313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:52.779140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:52.784726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:54.787629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:54.791281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:56.794136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:56.799493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:58.802955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:16:58.806528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-412306 -n embed-certs-412306
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-412306 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.24s)
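For manual triage outside the test harness, the same two post-mortem checks can be scripted. A minimal Go sketch follows (the profile/context name embed-certs-412306 and the two commands are copied from the run above; the program itself is illustrative and is not part of the minikube test helpers):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// Re-run the two post-mortem commands recorded above: the minikube API-server
	// status check and the query for pods that are not in the Running phase.
	func main() {
		cmds := [][]string{
			{"out/minikube-linux-amd64", "status", "--format={{.APIServer}}",
				"-p", "embed-certs-412306", "-n", "embed-certs-412306"},
			{"kubectl", "--context", "embed-certs-412306", "get", "po",
				"-o=jsonpath={.items[*].metadata.name}", "-A",
				"--field-selector=status.phase!=Running"},
		}
		for _, c := range cmds {
			out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
			fmt.Printf("$ %v\n%s\n", c, out)
			if err != nil {
				fmt.Println("command failed:", err)
			}
		}
	}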

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (5.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-990757 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-990757 --alsologtostderr -v=1: exit status 80 (1.615103644s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-990757 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:18:07.118493  380605 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:18:07.118652  380605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:07.118665  380605 out.go:374] Setting ErrFile to fd 2...
	I1123 10:18:07.118671  380605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:07.118998  380605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:18:07.119306  380605 out.go:368] Setting JSON to false
	I1123 10:18:07.119336  380605 mustload.go:66] Loading cluster: old-k8s-version-990757
	I1123 10:18:07.120465  380605 config.go:182] Loaded profile config "old-k8s-version-990757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 10:18:07.121358  380605 cli_runner.go:164] Run: docker container inspect old-k8s-version-990757 --format={{.State.Status}}
	I1123 10:18:07.140382  380605 host.go:66] Checking if "old-k8s-version-990757" exists ...
	I1123 10:18:07.140753  380605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:18:07.200988  380605 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-23 10:18:07.190656759 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:18:07.201784  380605 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-990757 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 10:18:07.203603  380605 out.go:179] * Pausing node old-k8s-version-990757 ... 
	I1123 10:18:07.205384  380605 host.go:66] Checking if "old-k8s-version-990757" exists ...
	I1123 10:18:07.205746  380605 ssh_runner.go:195] Run: systemctl --version
	I1123 10:18:07.205816  380605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-990757
	I1123 10:18:07.227733  380605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/old-k8s-version-990757/id_rsa Username:docker}
	I1123 10:18:07.330348  380605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:18:07.361963  380605 pause.go:52] kubelet running: true
	I1123 10:18:07.362042  380605 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:18:07.518046  380605 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:18:07.518192  380605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:18:07.588580  380605 cri.go:89] found id: "9ccd16d74353c15e1600527cf40023e30033f332b977b03880686a3913da40af"
	I1123 10:18:07.588607  380605 cri.go:89] found id: "a66dd032f72a291c4b9137f10802d9fbf947163ac4ec744f05cff426d166d072"
	I1123 10:18:07.588612  380605 cri.go:89] found id: "7d2173a013595020de9a41e415a6a98ae7dc0077b210812ebda0b0af5473a287"
	I1123 10:18:07.588615  380605 cri.go:89] found id: "cbaeadd56435f3be2e882ca71a5e4c2a576610a12fea8a213be3214b68289f60"
	I1123 10:18:07.588618  380605 cri.go:89] found id: "c6bd46fb7d9861dd655a23db64bd18f5e89613a832e4638352e74fcf52951f8f"
	I1123 10:18:07.588639  380605 cri.go:89] found id: "556e97942a390024b57d00ce6d2dab22e5234986f456ccd01a8426510bf12dc2"
	I1123 10:18:07.588642  380605 cri.go:89] found id: "674b4af1a0427bfaca38a9f2c3d8e894dc1b8e4c4bdb0b56c34b4ab06cffe9a1"
	I1123 10:18:07.588645  380605 cri.go:89] found id: "c9e0d8276aa071eee136baabda6e6268adcd34c9a47ea98e77308ea23679b766"
	I1123 10:18:07.588649  380605 cri.go:89] found id: "ebac26e4ce8f31e1b8f09e6ec06a5c05e6707bb591cc39abd93e16c3ee829fcc"
	I1123 10:18:07.588665  380605 cri.go:89] found id: "23ccf4ce86c662244f4b739e4ab18cdc793df7a827799056f377d3f50eab0214"
	I1123 10:18:07.588673  380605 cri.go:89] found id: "ffe2f071023537db208786f25a6aea227c1fe39c1b3f10f869486618924f5387"
	I1123 10:18:07.588675  380605 cri.go:89] found id: ""
	I1123 10:18:07.588724  380605 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:18:07.600943  380605 retry.go:31] will retry after 247.447085ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:18:07Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:18:07.849491  380605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:18:07.863184  380605 pause.go:52] kubelet running: false
	I1123 10:18:07.863263  380605 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:18:08.004250  380605 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:18:08.004329  380605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:18:08.076570  380605 cri.go:89] found id: "9ccd16d74353c15e1600527cf40023e30033f332b977b03880686a3913da40af"
	I1123 10:18:08.076593  380605 cri.go:89] found id: "a66dd032f72a291c4b9137f10802d9fbf947163ac4ec744f05cff426d166d072"
	I1123 10:18:08.076597  380605 cri.go:89] found id: "7d2173a013595020de9a41e415a6a98ae7dc0077b210812ebda0b0af5473a287"
	I1123 10:18:08.076600  380605 cri.go:89] found id: "cbaeadd56435f3be2e882ca71a5e4c2a576610a12fea8a213be3214b68289f60"
	I1123 10:18:08.076603  380605 cri.go:89] found id: "c6bd46fb7d9861dd655a23db64bd18f5e89613a832e4638352e74fcf52951f8f"
	I1123 10:18:08.076607  380605 cri.go:89] found id: "556e97942a390024b57d00ce6d2dab22e5234986f456ccd01a8426510bf12dc2"
	I1123 10:18:08.076610  380605 cri.go:89] found id: "674b4af1a0427bfaca38a9f2c3d8e894dc1b8e4c4bdb0b56c34b4ab06cffe9a1"
	I1123 10:18:08.076612  380605 cri.go:89] found id: "c9e0d8276aa071eee136baabda6e6268adcd34c9a47ea98e77308ea23679b766"
	I1123 10:18:08.076615  380605 cri.go:89] found id: "ebac26e4ce8f31e1b8f09e6ec06a5c05e6707bb591cc39abd93e16c3ee829fcc"
	I1123 10:18:08.076627  380605 cri.go:89] found id: "23ccf4ce86c662244f4b739e4ab18cdc793df7a827799056f377d3f50eab0214"
	I1123 10:18:08.076629  380605 cri.go:89] found id: "ffe2f071023537db208786f25a6aea227c1fe39c1b3f10f869486618924f5387"
	I1123 10:18:08.076632  380605 cri.go:89] found id: ""
	I1123 10:18:08.076678  380605 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:18:08.089736  380605 retry.go:31] will retry after 306.899341ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:18:08Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:18:08.397332  380605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:18:08.410754  380605 pause.go:52] kubelet running: false
	I1123 10:18:08.410817  380605 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:18:08.567543  380605 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:18:08.567640  380605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:18:08.642252  380605 cri.go:89] found id: "9ccd16d74353c15e1600527cf40023e30033f332b977b03880686a3913da40af"
	I1123 10:18:08.642301  380605 cri.go:89] found id: "a66dd032f72a291c4b9137f10802d9fbf947163ac4ec744f05cff426d166d072"
	I1123 10:18:08.642308  380605 cri.go:89] found id: "7d2173a013595020de9a41e415a6a98ae7dc0077b210812ebda0b0af5473a287"
	I1123 10:18:08.642313  380605 cri.go:89] found id: "cbaeadd56435f3be2e882ca71a5e4c2a576610a12fea8a213be3214b68289f60"
	I1123 10:18:08.642318  380605 cri.go:89] found id: "c6bd46fb7d9861dd655a23db64bd18f5e89613a832e4638352e74fcf52951f8f"
	I1123 10:18:08.642322  380605 cri.go:89] found id: "556e97942a390024b57d00ce6d2dab22e5234986f456ccd01a8426510bf12dc2"
	I1123 10:18:08.642327  380605 cri.go:89] found id: "674b4af1a0427bfaca38a9f2c3d8e894dc1b8e4c4bdb0b56c34b4ab06cffe9a1"
	I1123 10:18:08.642331  380605 cri.go:89] found id: "c9e0d8276aa071eee136baabda6e6268adcd34c9a47ea98e77308ea23679b766"
	I1123 10:18:08.642335  380605 cri.go:89] found id: "ebac26e4ce8f31e1b8f09e6ec06a5c05e6707bb591cc39abd93e16c3ee829fcc"
	I1123 10:18:08.642355  380605 cri.go:89] found id: "23ccf4ce86c662244f4b739e4ab18cdc793df7a827799056f377d3f50eab0214"
	I1123 10:18:08.642360  380605 cri.go:89] found id: "ffe2f071023537db208786f25a6aea227c1fe39c1b3f10f869486618924f5387"
	I1123 10:18:08.642364  380605 cri.go:89] found id: ""
	I1123 10:18:08.642408  380605 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:18:08.658688  380605 out.go:203] 
	W1123 10:18:08.659959  380605 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:18:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:18:08.659982  380605 out.go:285] * 
	W1123 10:18:08.667984  380605 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:18:08.669364  380605 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-990757 --alsologtostderr -v=1 failed: exit status 80
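The pause failure above is the host-side `sudo runc list -f json` call, which expects container state under /run/runc on the node; the error shows that directory does not exist even though the crictl queries in the same log still return the container IDs listed above. A diagnostic sketch, not part of the recorded run and assuming the profile is still reachable over `minikube ssh`:

	# Re-run the exact call the pause path issues (fails above with "open /run/runc: no such file or directory")
	out/minikube-linux-amd64 -p old-k8s-version-990757 ssh -- sudo runc list -f json
	# Check whether the runc state directory exists on the node at all
	out/minikube-linux-amd64 -p old-k8s-version-990757 ssh -- ls -la /run/runc
	# Compare against the CRI view, the same query minikube used to collect the container IDs above
	out/minikube-linux-amd64 -p old-k8s-version-990757 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system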
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-990757
helpers_test.go:243: (dbg) docker inspect old-k8s-version-990757:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fd35c6e2de37eeafffc0c894be730f01c526a52c707a28062e20151e44ba2fa0",
	        "Created": "2025-11-23T10:15:48.885853944Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 367054,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:17:03.88077584Z",
	            "FinishedAt": "2025-11-23T10:17:02.949192527Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/fd35c6e2de37eeafffc0c894be730f01c526a52c707a28062e20151e44ba2fa0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fd35c6e2de37eeafffc0c894be730f01c526a52c707a28062e20151e44ba2fa0/hostname",
	        "HostsPath": "/var/lib/docker/containers/fd35c6e2de37eeafffc0c894be730f01c526a52c707a28062e20151e44ba2fa0/hosts",
	        "LogPath": "/var/lib/docker/containers/fd35c6e2de37eeafffc0c894be730f01c526a52c707a28062e20151e44ba2fa0/fd35c6e2de37eeafffc0c894be730f01c526a52c707a28062e20151e44ba2fa0-json.log",
	        "Name": "/old-k8s-version-990757",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-990757:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-990757",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fd35c6e2de37eeafffc0c894be730f01c526a52c707a28062e20151e44ba2fa0",
	                "LowerDir": "/var/lib/docker/overlay2/a2ee0c3fffb58f362d6769aa6722dd8802b1b1ff1dbb3e5e659525bd269aeedd-init/diff:/var/lib/docker/overlay2/fa24abb4c55f78a010c7e2a32f724b8d5e912441e40bb77877899b0e5f3a9c8d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a2ee0c3fffb58f362d6769aa6722dd8802b1b1ff1dbb3e5e659525bd269aeedd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a2ee0c3fffb58f362d6769aa6722dd8802b1b1ff1dbb3e5e659525bd269aeedd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a2ee0c3fffb58f362d6769aa6722dd8802b1b1ff1dbb3e5e659525bd269aeedd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-990757",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-990757/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-990757",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-990757",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-990757",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "04c3e56e5f77c804f160ce18ac68cf438f5dbeb62ac14c22e2394d80dc4c3c0b",
	            "SandboxKey": "/var/run/docker/netns/04c3e56e5f77",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-990757": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "052388d40ecf9cf5a4a04b634ec5fc574a97435df4a8b65c1a426a6b8091971d",
	                    "EndpointID": "bd29407e3a0ea6f19bf8b2c1821256775e648599e2d867de641f0af82c1a561d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "f2:ee:64:b1:09:8c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-990757",
	                        "fd35c6e2de37"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
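The host ports under NetworkSettings.Ports above are how the post-mortem (and the earlier sshutil line with Port:33103) reach the node; the same Go-template inspect call that appears in the pause log can be run standalone to recover any forwarded port:

	# SSH port (22/tcp) -> 33103, matching the ssh client created during the pause attempt
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-990757
	# API server port (8443/tcp) -> 33106 per the inspect output above
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-990757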
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-990757 -n old-k8s-version-990757
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-990757 -n old-k8s-version-990757: exit status 2 (361.006236ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
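The probe above only asks for the container state via `--format={{.Host}}`, which is why it reports "Running" even though the kubelet was stopped during the partial pause. A broader probe, assuming the standard status template fields (Host, Kubelet, APIServer, Kubeconfig), would be:

	# Sketch only: query the remaining status fields for the same profile
	out/minikube-linux-amd64 status -p old-k8s-version-990757 --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'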
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-990757 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-990757 logs -n 25: (1.259008391s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-791161 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p bridge-791161 sudo docker system info                                                                                                                                 │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p bridge-791161 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p bridge-791161 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p bridge-791161 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo cri-dockerd --version                                                                                                                              │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p bridge-791161 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo containerd config dump                                                                                                                             │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo crio config                                                                                                                                        │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ delete  │ -p bridge-791161                                                                                                                                                         │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ delete  │ -p disable-driver-mounts-268907                                                                                                                                          │ disable-driver-mounts-268907 │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-541522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p default-k8s-diff-port-772252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-412306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ start   │ -p embed-certs-412306 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ old-k8s-version-990757 image list --format=json                                                                                                                          │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p old-k8s-version-990757 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-772252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:17:19
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:17:19.609492  373797 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:17:19.609729  373797 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:17:19.609737  373797 out.go:374] Setting ErrFile to fd 2...
	I1123 10:17:19.609741  373797 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:17:19.609928  373797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:17:19.610361  373797 out.go:368] Setting JSON to false
	I1123 10:17:19.611590  373797 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10781,"bootTime":1763882259,"procs":496,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:17:19.611646  373797 start.go:143] virtualization: kvm guest
	I1123 10:17:19.613670  373797 out.go:179] * [embed-certs-412306] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:17:19.614888  373797 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:17:19.614881  373797 notify.go:221] Checking for updates...
	I1123 10:17:19.616064  373797 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:17:19.617045  373797 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:17:19.617927  373797 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 10:17:19.618967  373797 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:17:19.619935  373797 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:17:19.621299  373797 config.go:182] Loaded profile config "embed-certs-412306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:19.621911  373797 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:17:19.648614  373797 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 10:17:19.648746  373797 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:17:19.710021  373797 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-23 10:17:19.699419611 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:17:19.710161  373797 docker.go:319] overlay module found
	I1123 10:17:19.712107  373797 out.go:179] * Using the docker driver based on existing profile
	I1123 10:17:19.713258  373797 start.go:309] selected driver: docker
	I1123 10:17:19.713275  373797 start.go:927] validating driver "docker" against &{Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:17:19.713374  373797 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:17:19.713898  373797 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:17:19.779691  373797 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-23 10:17:19.765216478 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:17:19.779989  373797 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:17:19.780023  373797 cni.go:84] Creating CNI manager for ""
	I1123 10:17:19.780080  373797 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:17:19.780271  373797 start.go:353] cluster config:
	{Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:17:19.782420  373797 out.go:179] * Starting "embed-certs-412306" primary control-plane node in "embed-certs-412306" cluster
	I1123 10:17:19.783638  373797 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:17:19.785045  373797 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:17:19.786269  373797 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:17:19.786307  373797 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 10:17:19.786316  373797 cache.go:65] Caching tarball of preloaded images
	I1123 10:17:19.786372  373797 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:17:19.786421  373797 preload.go:238] Found /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 10:17:19.786437  373797 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:17:19.786558  373797 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/config.json ...
	I1123 10:17:19.811595  373797 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:17:19.811627  373797 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:17:19.811673  373797 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:17:19.811717  373797 start.go:360] acquireMachinesLock for embed-certs-412306: {Name:mk4f25fc676f86a4d15ab0bc341b16f0d56928c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:17:19.811792  373797 start.go:364] duration metric: took 48.053µs to acquireMachinesLock for "embed-certs-412306"
	I1123 10:17:19.811817  373797 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:17:19.811827  373797 fix.go:54] fixHost starting: 
	I1123 10:17:19.812155  373797 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:17:19.832074  373797 fix.go:112] recreateIfNeeded on embed-certs-412306: state=Stopped err=<nil>
	W1123 10:17:19.832132  373797 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 10:17:18.495023  371192 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:17:18.495055  371192 machine.go:97] duration metric: took 5.084691596s to provisionDockerMachine
	I1123 10:17:18.495069  371192 start.go:293] postStartSetup for "no-preload-541522" (driver="docker")
	I1123 10:17:18.495082  371192 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:17:18.495215  371192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:17:18.495278  371192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:17:18.522688  371192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:17:18.634392  371192 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:17:18.638904  371192 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:17:18.638946  371192 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:17:18.638961  371192 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/addons for local assets ...
	I1123 10:17:18.639015  371192 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/files for local assets ...
	I1123 10:17:18.639129  371192 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem -> 678702.pem in /etc/ssl/certs
	I1123 10:17:18.639289  371192 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:17:18.650865  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:17:18.676275  371192 start.go:296] duration metric: took 181.188377ms for postStartSetup
	I1123 10:17:18.676398  371192 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:17:18.676447  371192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:17:18.696551  371192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:17:18.798813  371192 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:17:18.804200  371192 fix.go:56] duration metric: took 5.847399025s for fixHost
	I1123 10:17:18.804227  371192 start.go:83] releasing machines lock for "no-preload-541522", held for 5.847449946s
	I1123 10:17:18.804314  371192 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-541522
	I1123 10:17:18.823965  371192 ssh_runner.go:195] Run: cat /version.json
	I1123 10:17:18.824026  371192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:17:18.824050  371192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:17:18.824151  371192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:17:18.846278  371192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:17:18.847666  371192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:17:19.015957  371192 ssh_runner.go:195] Run: systemctl --version
	I1123 10:17:19.023883  371192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:17:19.072321  371192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:17:19.078795  371192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:17:19.078868  371192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:17:19.088538  371192 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:17:19.088566  371192 start.go:496] detecting cgroup driver to use...
	I1123 10:17:19.088600  371192 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 10:17:19.088643  371192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:17:19.110539  371192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:17:19.132949  371192 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:17:19.133028  371192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:17:19.150165  371192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:17:19.165619  371192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:17:19.271465  371192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:17:19.379873  371192 docker.go:234] disabling docker service ...
	I1123 10:17:19.379932  371192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:17:19.398139  371192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:17:19.412992  371192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:17:19.503640  371192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:17:19.600343  371192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:17:19.613822  371192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:17:19.629382  371192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:17:19.629446  371192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:19.640465  371192 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 10:17:19.640529  371192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:19.651535  371192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:19.661697  371192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:19.674338  371192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:17:19.684964  371192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:19.697156  371192 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:19.707055  371192 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:19.717460  371192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:17:19.725865  371192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:17:19.736523  371192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:19.829013  371192 ssh_runner.go:195] Run: sudo systemctl restart crio
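	The two sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the expected pause image and the systemd cgroup manager before crio is restarted. A minimal Go sketch of the same two substitutions applied to a local copy of that file (illustrative only; minikube performs these edits remotely over SSH with sed, not with this code):

```go
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	// Path assumed from the log above; point this at a local test copy.
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}

	// Mirrors: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))

	// Mirrors: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))

	if err := os.WriteFile(conf, data, 0o644); err != nil {
		log.Fatal(err)
	}
}
```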
	I1123 10:17:19.984026  371192 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:17:19.984148  371192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:17:19.988801  371192 start.go:564] Will wait 60s for crictl version
	I1123 10:17:19.988866  371192 ssh_runner.go:195] Run: which crictl
	I1123 10:17:19.993024  371192 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:17:20.026159  371192 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:17:20.026262  371192 ssh_runner.go:195] Run: crio --version
	I1123 10:17:20.057945  371192 ssh_runner.go:195] Run: crio --version
	I1123 10:17:20.092537  371192 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:17:20.095052  371192 cli_runner.go:164] Run: docker network inspect no-preload-541522 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:17:20.113293  371192 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 10:17:20.117900  371192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
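	The /bin/bash one-liner above refreshes the host.minikube.internal mapping in /etc/hosts by dropping any stale entry and appending the gateway IP. A rough Go equivalent of that grep -v / echo pipeline (path and IP taken from the command above; purely illustrative):

```go
package main

import (
	"log"
	"os"
	"strings"
)

// refreshHostsEntry drops any line ending in "\thost.minikube.internal" and
// appends a fresh "<ip>\thost.minikube.internal" mapping, mirroring the
// shell pipeline in the log above. Sketch only.
func refreshHostsEntry(path, ip string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\thost.minikube.internal")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := refreshHostsEntry("/etc/hosts", "192.168.85.1"); err != nil {
		log.Fatal(err)
	}
}
```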
	I1123 10:17:20.129916  371192 kubeadm.go:884] updating cluster {Name:no-preload-541522 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-541522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:17:20.130038  371192 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:17:20.130098  371192 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:17:20.168390  371192 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:17:20.168418  371192 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:17:20.168427  371192 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1123 10:17:20.168553  371192 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-541522 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-541522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:17:20.168646  371192 ssh_runner.go:195] Run: crio config
	I1123 10:17:20.221690  371192 cni.go:84] Creating CNI manager for ""
	I1123 10:17:20.221718  371192 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:17:20.221739  371192 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:17:20.221769  371192 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-541522 NodeName:no-preload-541522 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:17:20.221955  371192 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-541522"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:17:20.222044  371192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:17:20.231152  371192 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:17:20.231287  371192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:17:20.240306  371192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 10:17:20.253726  371192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:17:20.268663  371192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
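	The kubeadm.yaml shown above is rendered from the kubeadm options (node IP, cluster name, CRI socket, subnets) and then shipped to /var/tmp/minikube/kubeadm.yaml.new. A small text/template sketch that produces the same kind of InitConfiguration fragment; the template and struct here are illustrative, not minikube's actual template:

```go
package main

import (
	"log"
	"os"
	"text/template"
)

// initCfg mirrors the shape of the InitConfiguration section in the log above.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

type opts struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
}

func main() {
	tmpl := template.Must(template.New("init").Parse(initCfg))
	err := tmpl.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.85.2",
		APIServerPort:    8443,
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "no-preload-541522",
	})
	if err != nil {
		log.Fatal(err)
	}
}
```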
	I1123 10:17:20.286013  371192 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:17:20.290286  371192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:17:20.301340  371192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:20.405447  371192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:20.425508  371192 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522 for IP: 192.168.85.2
	I1123 10:17:20.425698  371192 certs.go:195] generating shared ca certs ...
	I1123 10:17:20.425746  371192 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:20.425993  371192 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 10:17:20.426072  371192 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 10:17:20.426083  371192 certs.go:257] generating profile certs ...
	I1123 10:17:20.426244  371192 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/client.key
	I1123 10:17:20.426355  371192 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/apiserver.key.29b5f89d
	I1123 10:17:20.426438  371192 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/proxy-client.key
	I1123 10:17:20.426605  371192 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem (1338 bytes)
	W1123 10:17:20.426644  371192 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870_empty.pem, impossibly tiny 0 bytes
	I1123 10:17:20.426655  371192 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 10:17:20.426693  371192 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:17:20.426725  371192 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:17:20.426756  371192 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 10:17:20.426822  371192 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:17:20.428032  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:17:20.456018  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:17:20.479658  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:17:20.501657  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 10:17:20.529181  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 10:17:20.550509  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:17:20.569511  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:17:20.588713  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:17:20.606754  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:17:20.625365  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem --> /usr/share/ca-certificates/67870.pem (1338 bytes)
	I1123 10:17:20.644697  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /usr/share/ca-certificates/678702.pem (1708 bytes)
	I1123 10:17:20.662851  371192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:17:20.675998  371192 ssh_runner.go:195] Run: openssl version
	I1123 10:17:20.682347  371192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678702.pem && ln -fs /usr/share/ca-certificates/678702.pem /etc/ssl/certs/678702.pem"
	I1123 10:17:20.691464  371192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678702.pem
	I1123 10:17:20.695411  371192 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:28 /usr/share/ca-certificates/678702.pem
	I1123 10:17:20.695463  371192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678702.pem
	I1123 10:17:20.730632  371192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678702.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:17:20.739401  371192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:17:20.748466  371192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:20.752659  371192 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:20.752735  371192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:20.788588  371192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:17:20.797604  371192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67870.pem && ln -fs /usr/share/ca-certificates/67870.pem /etc/ssl/certs/67870.pem"
	I1123 10:17:20.806894  371192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67870.pem
	I1123 10:17:20.811228  371192 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:28 /usr/share/ca-certificates/67870.pem
	I1123 10:17:20.811284  371192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67870.pem
	I1123 10:17:20.846328  371192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67870.pem /etc/ssl/certs/51391683.0"
	I1123 10:17:20.855328  371192 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:17:20.859478  371192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:17:20.893578  371192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:17:20.929466  371192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:17:20.977899  371192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:17:21.020876  371192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:17:21.070653  371192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
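	Each openssl x509 -checkend 86400 run above asks whether a control-plane certificate will expire within the next 24 hours before the restart path is attempted. A hedged Go equivalent using crypto/x509 (the cert path is one of the files checked above and has to exist where this runs):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d, which is the question `openssl x509 -checkend <seconds>` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
```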
	I1123 10:17:21.123318  371192 kubeadm.go:401] StartCluster: {Name:no-preload-541522 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-541522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:17:21.123410  371192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:17:21.123464  371192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:17:21.157433  371192 cri.go:89] found id: "3638abd54c634ee34a952430b3c8ad3b8c78fb2c6abb24bdbdb0382ea4147574"
	I1123 10:17:21.157457  371192 cri.go:89] found id: "3806d3b11c0c4af0a295b79daeec9cddc1ca76da75190a71f7234b95f181f202"
	I1123 10:17:21.157464  371192 cri.go:89] found id: "454d88050f14061405415d3f827ed9bd0308c85f15a90182f9e2c8138c52f80e"
	I1123 10:17:21.157469  371192 cri.go:89] found id: "a08adaf22d6a20e8d1bde7d9ffe78523a672a25236e3b7bd280fe7482c65da6c"
	I1123 10:17:21.157473  371192 cri.go:89] found id: ""
	I1123 10:17:21.157519  371192 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:17:21.170853  371192 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:17:21Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:17:21.170942  371192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:17:21.179761  371192 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:17:21.179782  371192 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:17:21.179832  371192 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:17:21.188635  371192 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:17:21.189189  371192 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-541522" does not appear in /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:17:21.189463  371192 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-64343/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-541522" cluster setting kubeconfig missing "no-preload-541522" context setting]
	I1123 10:17:21.190011  371192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:21.191382  371192 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:17:21.200134  371192 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 10:17:21.200165  371192 kubeadm.go:602] duration metric: took 20.377182ms to restartPrimaryControlPlane
	I1123 10:17:21.200176  371192 kubeadm.go:403] duration metric: took 76.869746ms to StartCluster
	I1123 10:17:21.200197  371192 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:21.200268  371192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:17:21.201522  371192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:21.201810  371192 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:17:21.201858  371192 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:17:21.201968  371192 addons.go:70] Setting storage-provisioner=true in profile "no-preload-541522"
	I1123 10:17:21.201995  371192 addons.go:239] Setting addon storage-provisioner=true in "no-preload-541522"
	W1123 10:17:21.202008  371192 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:17:21.202006  371192 addons.go:70] Setting dashboard=true in profile "no-preload-541522"
	I1123 10:17:21.202029  371192 addons.go:70] Setting default-storageclass=true in profile "no-preload-541522"
	I1123 10:17:21.202053  371192 addons.go:239] Setting addon dashboard=true in "no-preload-541522"
	I1123 10:17:21.202055  371192 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-541522"
	W1123 10:17:21.202063  371192 addons.go:248] addon dashboard should already be in state true
	I1123 10:17:21.202081  371192 config.go:182] Loaded profile config "no-preload-541522": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:21.202038  371192 host.go:66] Checking if "no-preload-541522" exists ...
	I1123 10:17:21.202110  371192 host.go:66] Checking if "no-preload-541522" exists ...
	I1123 10:17:21.202447  371192 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:17:21.202598  371192 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:17:21.202660  371192 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:17:21.204706  371192 out.go:179] * Verifying Kubernetes components...
	I1123 10:17:21.206052  371192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:21.227863  371192 addons.go:239] Setting addon default-storageclass=true in "no-preload-541522"
	W1123 10:17:21.227926  371192 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:17:21.227956  371192 host.go:66] Checking if "no-preload-541522" exists ...
	I1123 10:17:21.228549  371192 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:17:21.232585  371192 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 10:17:21.232585  371192 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:17:21.233696  371192 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:21.233729  371192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:17:21.233799  371192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:17:21.233705  371192 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:17:21.234809  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:17:21.234828  371192 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:17:21.234890  371192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:17:21.265221  371192 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:21.265260  371192 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:17:21.265326  371192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:17:21.274943  371192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:17:21.276965  371192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:17:21.296189  371192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
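	Each `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` call above resolves the host port Docker published for the container's SSH port, which is what the new ssh clients then dial on 127.0.0.1. A small sketch that shells out to the Docker CLI with the same template (container name assumed from this run):

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// hostSSHPort asks the Docker CLI for the host port published for the
// container's 22/tcp mapping, mirroring the inspect template in the log above.
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command(
		"docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container,
	).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("no-preload-541522")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + port)
}
```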
	I1123 10:17:21.367731  371192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:21.382397  371192 node_ready.go:35] waiting up to 6m0s for node "no-preload-541522" to be "Ready" ...
	I1123 10:17:21.398915  371192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:21.401528  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:17:21.401552  371192 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:17:21.419867  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:17:21.419897  371192 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:17:21.422575  371192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:21.439431  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:17:21.439464  371192 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:17:21.459190  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:17:21.459215  371192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:17:21.474803  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:17:21.474837  371192 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:17:21.490492  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:17:21.490520  371192 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:17:21.504992  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:17:21.505017  371192 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:17:21.519429  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:17:21.519456  371192 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:17:21.533295  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:17:21.533322  371192 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:17:21.550435  371192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
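	The addon manifests are copied under /etc/kubernetes/addons and then applied with the node's own kubectl binary against the in-cluster kubeconfig. A minimal os/exec sketch of that pattern, using the binary and kubeconfig paths from the command above (it would have to run on the node itself):

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Mirrors: sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply -f <manifest>
	// Paths are taken from the log above; sketch only.
	cmd := exec.Command(
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
	)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```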
	I1123 10:17:18.396407  371315 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-772252:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.434126085s)
	I1123 10:17:18.396438  371315 kic.go:203] duration metric: took 4.434295488s to extract preloaded images to volume ...
	W1123 10:17:18.396521  371315 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 10:17:18.396560  371315 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 10:17:18.396604  371315 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 10:17:18.463256  371315 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-772252 --name default-k8s-diff-port-772252 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-772252 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-772252 --network default-k8s-diff-port-772252 --ip 192.168.103.2 --volume default-k8s-diff-port-772252:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 10:17:18.796638  371315 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Running}}
	I1123 10:17:18.816868  371315 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:17:18.840858  371315 cli_runner.go:164] Run: docker exec default-k8s-diff-port-772252 stat /var/lib/dpkg/alternatives/iptables
	I1123 10:17:18.897619  371315 oci.go:144] the created container "default-k8s-diff-port-772252" has a running status.
	I1123 10:17:18.897661  371315 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa...
	I1123 10:17:18.977365  371315 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 10:17:19.006386  371315 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:17:19.030565  371315 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 10:17:19.030591  371315 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-772252 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 10:17:19.079641  371315 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:17:19.103668  371315 machine.go:94] provisionDockerMachine start ...
	I1123 10:17:19.103794  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:19.133387  371315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:19.134363  371315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1123 10:17:19.134412  371315 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:17:19.135234  371315 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54846->127.0.0.1:33113: read: connection reset by peer
	I1123 10:17:22.290470  371315 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-772252
	
	I1123 10:17:22.290505  371315 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-772252"
	I1123 10:17:22.290581  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:22.310197  371315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:22.310489  371315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1123 10:17:22.310506  371315 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-772252 && echo "default-k8s-diff-port-772252" | sudo tee /etc/hostname
	I1123 10:17:22.471190  371315 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-772252
	
	I1123 10:17:22.471288  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:22.491303  371315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:22.491559  371315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1123 10:17:22.491595  371315 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-772252' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-772252/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-772252' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:17:22.649053  371315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:17:22.649118  371315 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-64343/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-64343/.minikube}
	I1123 10:17:22.649148  371315 ubuntu.go:190] setting up certificates
	I1123 10:17:22.649175  371315 provision.go:84] configureAuth start
	I1123 10:17:22.649268  371315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772252
	I1123 10:17:22.670533  371315 provision.go:143] copyHostCerts
	I1123 10:17:22.670621  371315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem, removing ...
	I1123 10:17:22.670640  371315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem
	I1123 10:17:22.670723  371315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem (1082 bytes)
	I1123 10:17:22.670844  371315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem, removing ...
	I1123 10:17:22.670855  371315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem
	I1123 10:17:22.670899  371315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem (1123 bytes)
	I1123 10:17:22.671009  371315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem, removing ...
	I1123 10:17:22.671020  371315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem
	I1123 10:17:22.671063  371315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem (1675 bytes)
	I1123 10:17:22.671173  371315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-772252 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-772252 localhost minikube]
	I1123 10:17:22.781341  371315 provision.go:177] copyRemoteCerts
	I1123 10:17:22.781420  371315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:17:22.781468  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:22.813351  371315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:17:22.707516  371192 node_ready.go:49] node "no-preload-541522" is "Ready"
	I1123 10:17:22.707555  371192 node_ready.go:38] duration metric: took 1.325107134s for node "no-preload-541522" to be "Ready" ...
	I1123 10:17:22.707572  371192 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:17:22.707865  371192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:17:23.284024  371192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.885050693s)
	I1123 10:17:23.284105  371192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.861477632s)
	I1123 10:17:23.284235  371192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.733760656s)
	I1123 10:17:23.284398  371192 api_server.go:72] duration metric: took 2.082551658s to wait for apiserver process to appear ...
	I1123 10:17:23.284414  371192 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:17:23.284434  371192 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:17:23.286130  371192 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-541522 addons enable metrics-server
	
	I1123 10:17:23.289610  371192 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:17:23.289631  371192 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:17:23.292533  371192 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
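	The 500 from /healthz above is expected while the rbac and system-priority-class post-start hooks finish; the start code keeps polling the endpoint until it returns 200 or the wait times out. A hedged polling sketch against the same URL (TLS verification is skipped here only to keep the example self-contained; a real client would trust the cluster CA instead):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", code, "- retrying")
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("apiserver did not become healthy in time")
}
```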
	W1123 10:17:20.914139  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:22.914473  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	I1123 10:17:19.834110  373797 out.go:252] * Restarting existing docker container for "embed-certs-412306" ...
	I1123 10:17:19.834184  373797 cli_runner.go:164] Run: docker start embed-certs-412306
	I1123 10:17:20.130659  373797 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:17:20.150941  373797 kic.go:430] container "embed-certs-412306" state is running.
	I1123 10:17:20.151437  373797 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412306
	I1123 10:17:20.172969  373797 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/config.json ...
	I1123 10:17:20.173319  373797 machine.go:94] provisionDockerMachine start ...
	I1123 10:17:20.173400  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:20.193884  373797 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:20.194212  373797 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1123 10:17:20.194231  373797 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:17:20.195045  373797 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48678->127.0.0.1:33118: read: connection reset by peer
	I1123 10:17:23.348386  373797 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-412306
	
	I1123 10:17:23.348432  373797 ubuntu.go:182] provisioning hostname "embed-certs-412306"
	I1123 10:17:23.348510  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:23.369008  373797 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:23.369294  373797 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1123 10:17:23.369309  373797 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-412306 && echo "embed-certs-412306" | sudo tee /etc/hostname
	I1123 10:17:23.527808  373797 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-412306
	
	I1123 10:17:23.527905  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:23.552954  373797 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:23.553243  373797 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1123 10:17:23.553263  373797 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-412306' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-412306/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-412306' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:17:23.705470  373797 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:17:23.705501  373797 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-64343/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-64343/.minikube}
	I1123 10:17:23.705547  373797 ubuntu.go:190] setting up certificates
	I1123 10:17:23.705570  373797 provision.go:84] configureAuth start
	I1123 10:17:23.705648  373797 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412306
	I1123 10:17:23.727746  373797 provision.go:143] copyHostCerts
	I1123 10:17:23.727819  373797 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem, removing ...
	I1123 10:17:23.727834  373797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem
	I1123 10:17:23.727904  373797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem (1675 bytes)
	I1123 10:17:23.728152  373797 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem, removing ...
	I1123 10:17:23.728170  373797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem
	I1123 10:17:23.728229  373797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem (1082 bytes)
	I1123 10:17:23.728394  373797 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem, removing ...
	I1123 10:17:23.728408  373797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem
	I1123 10:17:23.728442  373797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem (1123 bytes)
	I1123 10:17:23.728545  373797 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem org=jenkins.embed-certs-412306 san=[127.0.0.1 192.168.94.2 embed-certs-412306 localhost minikube]
	I1123 10:17:23.786003  373797 provision.go:177] copyRemoteCerts
	I1123 10:17:23.786110  373797 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:17:23.786168  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:23.808607  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:23.930337  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 10:17:23.954195  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:17:23.973335  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1123 10:17:23.992599  373797 provision.go:87] duration metric: took 287.009489ms to configureAuth
	I1123 10:17:23.992633  373797 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:17:23.992827  373797 config.go:182] Loaded profile config "embed-certs-412306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:23.992947  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:24.015952  373797 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:24.016359  373797 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1123 10:17:24.016396  373797 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:17:24.382671  373797 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:17:24.382710  373797 machine.go:97] duration metric: took 4.209367018s to provisionDockerMachine
	I1123 10:17:24.382728  373797 start.go:293] postStartSetup for "embed-certs-412306" (driver="docker")
	I1123 10:17:24.382754  373797 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:17:24.382834  373797 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:17:24.382885  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:24.404505  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:24.511869  373797 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:17:24.516166  373797 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:17:24.516207  373797 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:17:24.516222  373797 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/addons for local assets ...
	I1123 10:17:24.516280  373797 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/files for local assets ...
	I1123 10:17:24.516393  373797 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem -> 678702.pem in /etc/ssl/certs
	I1123 10:17:24.516518  373797 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:17:24.524244  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:17:24.542545  373797 start.go:296] duration metric: took 159.79015ms for postStartSetup
	I1123 10:17:24.542619  373797 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:17:24.542668  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:24.563717  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
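Note on the provisionDockerMachine step above: it passes the service CIDR to CRI-O as an insecure registry by writing /etc/sysconfig/crio.minikube and restarting the daemon. A minimal sketch of the same change done by hand from a shell inside the node (for example via minikube ssh); the CIDR and file path are the ones this run used:

    # run inside the minikube node; mirrors the SSH command logged at 10:17:24
    sudo mkdir -p /etc/sysconfig
    echo "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio
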
	I1123 10:17:22.926511  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 10:17:22.950745  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 10:17:22.971167  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:17:22.992406  371315 provision.go:87] duration metric: took 343.209444ms to configureAuth
	I1123 10:17:22.992440  371315 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:17:22.992638  371315 config.go:182] Loaded profile config "default-k8s-diff-port-772252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:22.992764  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:23.015449  371315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:23.015746  371315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1123 10:17:23.015770  371315 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:17:23.334757  371315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:17:23.334787  371315 machine.go:97] duration metric: took 4.23109286s to provisionDockerMachine
	I1123 10:17:23.334800  371315 client.go:176] duration metric: took 10.163153814s to LocalClient.Create
	I1123 10:17:23.334826  371315 start.go:167] duration metric: took 10.163248519s to libmachine.API.Create "default-k8s-diff-port-772252"
	I1123 10:17:23.334840  371315 start.go:293] postStartSetup for "default-k8s-diff-port-772252" (driver="docker")
	I1123 10:17:23.334860  371315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:17:23.334929  371315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:17:23.334985  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:23.356328  371315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:17:23.463374  371315 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:17:23.467492  371315 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:17:23.467528  371315 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:17:23.467542  371315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/addons for local assets ...
	I1123 10:17:23.467604  371315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/files for local assets ...
	I1123 10:17:23.467697  371315 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem -> 678702.pem in /etc/ssl/certs
	I1123 10:17:23.467820  371315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:17:23.475956  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:17:23.497077  371315 start.go:296] duration metric: took 162.21628ms for postStartSetup
	I1123 10:17:23.497453  371315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772252
	I1123 10:17:23.517994  371315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/config.json ...
	I1123 10:17:23.518317  371315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:17:23.518376  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:23.544356  371315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:17:23.649434  371315 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:17:23.654312  371315 start.go:128] duration metric: took 10.487060831s to createHost
	I1123 10:17:23.654340  371315 start.go:83] releasing machines lock for "default-k8s-diff-port-772252", held for 10.487196123s
	I1123 10:17:23.654429  371315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772252
	I1123 10:17:23.672341  371315 ssh_runner.go:195] Run: cat /version.json
	I1123 10:17:23.672366  371315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:17:23.672402  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:23.672450  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:23.692134  371315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:17:23.692271  371315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:17:23.884469  371315 ssh_runner.go:195] Run: systemctl --version
	I1123 10:17:23.894358  371315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:17:23.951450  371315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:17:23.956897  371315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:17:23.956984  371315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:17:23.983807  371315 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 10:17:23.983830  371315 start.go:496] detecting cgroup driver to use...
	I1123 10:17:23.983859  371315 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 10:17:23.983898  371315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:17:24.001497  371315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:17:24.017078  371315 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:17:24.017175  371315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:17:24.033394  371315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:17:24.052236  371315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:17:24.146681  371315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:17:24.245622  371315 docker.go:234] disabling docker service ...
	I1123 10:17:24.245695  371315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:17:24.267262  371315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:17:24.283984  371315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:17:24.393614  371315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:17:24.485577  371315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:17:24.498373  371315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:17:24.513700  371315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:17:24.513745  371315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:24.524969  371315 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 10:17:24.525040  371315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:24.534062  371315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:24.543449  371315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:24.552383  371315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:17:24.562139  371315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:24.572184  371315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:24.587719  371315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:24.597575  371315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:17:24.606824  371315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:17:24.615535  371315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:24.700246  371315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:17:24.855040  371315 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:17:24.855123  371315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:17:24.859368  371315 start.go:564] Will wait 60s for crictl version
	I1123 10:17:24.859428  371315 ssh_runner.go:195] Run: which crictl
	I1123 10:17:24.863070  371315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:17:24.889521  371315 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:17:24.889599  371315 ssh_runner.go:195] Run: crio --version
	I1123 10:17:24.920115  371315 ssh_runner.go:195] Run: crio --version
	I1123 10:17:24.954417  371315 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
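For readers following the CRI-O preparation above (10:17:24.498 through 10:17:24.889): it amounts to pointing crictl at the CRI-O socket, pinning the pause image, forcing the systemd cgroup driver, enabling IP forwarding, and restarting the runtime. A condensed sketch of those edits, run on the node with the same paths the log uses:

    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio
    sudo crictl version   # the log waits up to 60s for the socket and this call
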
	I1123 10:17:24.666037  373797 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:17:24.670358  373797 fix.go:56] duration metric: took 4.858524746s for fixHost
	I1123 10:17:24.670382  373797 start.go:83] releasing machines lock for "embed-certs-412306", held for 4.858576755s
	I1123 10:17:24.670445  373797 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412306
	I1123 10:17:24.688334  373797 ssh_runner.go:195] Run: cat /version.json
	I1123 10:17:24.688391  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:24.688402  373797 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:17:24.688482  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:24.708037  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:24.709542  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:24.881767  373797 ssh_runner.go:195] Run: systemctl --version
	I1123 10:17:24.889568  373797 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:17:24.928028  373797 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:17:24.933463  373797 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:17:24.933545  373797 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:17:24.944053  373797 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:17:24.944096  373797 start.go:496] detecting cgroup driver to use...
	I1123 10:17:24.944134  373797 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 10:17:24.944176  373797 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:17:24.961024  373797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:17:24.975672  373797 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:17:24.975755  373797 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:17:24.992860  373797 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:17:25.007660  373797 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:17:25.101571  373797 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:17:25.187706  373797 docker.go:234] disabling docker service ...
	I1123 10:17:25.187771  373797 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:17:25.203871  373797 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:17:25.220342  373797 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:17:25.310358  373797 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:17:25.403221  373797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:17:25.417018  373797 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:17:25.431507  373797 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:17:25.431564  373797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:25.441415  373797 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 10:17:25.441481  373797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:25.450871  373797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:25.459923  373797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:25.468817  373797 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:17:25.477361  373797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:25.487848  373797 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:25.496857  373797 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:25.506275  373797 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:17:25.514119  373797 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:17:25.522214  373797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:25.609285  373797 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:17:25.788628  373797 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:17:25.788710  373797 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:17:25.794577  373797 start.go:564] Will wait 60s for crictl version
	I1123 10:17:25.794647  373797 ssh_runner.go:195] Run: which crictl
	I1123 10:17:25.801054  373797 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:17:25.830537  373797 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:17:25.830618  373797 ssh_runner.go:195] Run: crio --version
	I1123 10:17:25.862137  373797 ssh_runner.go:195] Run: crio --version
	I1123 10:17:25.896309  373797 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:17:24.955476  371315 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-772252 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:17:24.975771  371315 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1123 10:17:24.980312  371315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
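The docker network inspect call above uses a Go template to pull the subnet, gateway, MTU and container IPs so that host.minikube.internal can be injected into /etc/hosts. A shorter template that extracts just the subnet and gateway (same docker CLI, profile name taken from this run):

    docker network inspect default-k8s-diff-port-772252 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
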
	I1123 10:17:24.992335  371315 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-772252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:17:24.992470  371315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:17:24.992532  371315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:17:25.028422  371315 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:17:25.028446  371315 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:17:25.028507  371315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:17:25.062707  371315 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:17:25.062731  371315 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:17:25.062740  371315 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1123 10:17:25.062842  371315 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-772252 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:17:25.062921  371315 ssh_runner.go:195] Run: crio config
	I1123 10:17:25.111817  371315 cni.go:84] Creating CNI manager for ""
	I1123 10:17:25.111854  371315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:17:25.111873  371315 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:17:25.111897  371315 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-772252 NodeName:default-k8s-diff-port-772252 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:17:25.112030  371315 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-772252"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:17:25.112105  371315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:17:25.120360  371315 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:17:25.120421  371315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:17:25.129795  371315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1123 10:17:25.145251  371315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:17:25.160692  371315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
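With the kubeadm config staged at /var/tmp/minikube/kubeadm.yaml.new (the YAML shown above), a dry run against the same staged binaries is one way to sanity-check it without touching cluster state; a sketch only, not something the test itself runs:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
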
	I1123 10:17:25.173307  371315 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:17:25.177001  371315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:17:25.187493  371315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:25.282599  371315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:25.306664  371315 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252 for IP: 192.168.103.2
	I1123 10:17:25.306684  371315 certs.go:195] generating shared ca certs ...
	I1123 10:17:25.306700  371315 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:25.306864  371315 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 10:17:25.306920  371315 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 10:17:25.306934  371315 certs.go:257] generating profile certs ...
	I1123 10:17:25.307023  371315 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/client.key
	I1123 10:17:25.307042  371315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/client.crt with IP's: []
	I1123 10:17:25.369960  371315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/client.crt ...
	I1123 10:17:25.369988  371315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/client.crt: {Name:mk7f4719b240e51f803a30c22478d2cf1d0e1199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:25.370175  371315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/client.key ...
	I1123 10:17:25.370199  371315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/client.key: {Name:mkd811194a7ece5d786aacc912a42bc560ea4296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:25.370292  371315 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.key.21e800d1
	I1123 10:17:25.370312  371315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.crt.21e800d1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1123 10:17:25.423997  371315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.crt.21e800d1 ...
	I1123 10:17:25.424030  371315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.crt.21e800d1: {Name:mk6de12f0748b003728065f4169ec8bcc4410f5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:25.424186  371315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.key.21e800d1 ...
	I1123 10:17:25.424201  371315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.key.21e800d1: {Name:mkfeca4687eb3d49033d88eae184a2c0e40ab44b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:25.424294  371315 certs.go:382] copying /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.crt.21e800d1 -> /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.crt
	I1123 10:17:25.424406  371315 certs.go:386] copying /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.key.21e800d1 -> /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.key
	I1123 10:17:25.424489  371315 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.key
	I1123 10:17:25.424508  371315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.crt with IP's: []
	I1123 10:17:25.484984  371315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.crt ...
	I1123 10:17:25.485010  371315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.crt: {Name:mkc9c6bf8ac400416e9eb1893c09433f60578057 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:25.485213  371315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.key ...
	I1123 10:17:25.485235  371315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.key: {Name:mk504063bf5acfe6751f65cfaba17411b52827e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
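The profile certificates above (client, apiserver, aggregator proxy-client) are generated in-process by minikube's crypto.go rather than by shelling out. Purely as an illustration of the apiserver certificate's SAN list from this run (10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2), an equivalent could be produced with openssl; the ca.crt/ca.key and output file names below are placeholders, not paths from the log:

    openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" \
      -keyout apiserver.key -out apiserver.csr
    printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.103.2\n' > san.ext
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -extfile san.ext -out apiserver.crt
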
	I1123 10:17:25.485488  371315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem (1338 bytes)
	W1123 10:17:25.485543  371315 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870_empty.pem, impossibly tiny 0 bytes
	I1123 10:17:25.485559  371315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 10:17:25.485600  371315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:17:25.485631  371315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:17:25.485652  371315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 10:17:25.485702  371315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:17:25.486510  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:17:25.505646  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:17:25.524124  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:17:25.543811  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 10:17:25.568526  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 10:17:25.588007  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:17:25.606546  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:17:25.626591  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:17:25.647854  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem --> /usr/share/ca-certificates/67870.pem (1338 bytes)
	I1123 10:17:25.673928  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /usr/share/ca-certificates/678702.pem (1708 bytes)
	I1123 10:17:25.698071  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:17:25.717953  371315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:17:25.733564  371315 ssh_runner.go:195] Run: openssl version
	I1123 10:17:25.743071  371315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67870.pem && ln -fs /usr/share/ca-certificates/67870.pem /etc/ssl/certs/67870.pem"
	I1123 10:17:25.755937  371315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67870.pem
	I1123 10:17:25.762383  371315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:28 /usr/share/ca-certificates/67870.pem
	I1123 10:17:25.762464  371315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67870.pem
	I1123 10:17:25.817928  371315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67870.pem /etc/ssl/certs/51391683.0"
	I1123 10:17:25.829386  371315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678702.pem && ln -fs /usr/share/ca-certificates/678702.pem /etc/ssl/certs/678702.pem"
	I1123 10:17:25.840669  371315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678702.pem
	I1123 10:17:25.845206  371315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:28 /usr/share/ca-certificates/678702.pem
	I1123 10:17:25.845259  371315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678702.pem
	I1123 10:17:25.884816  371315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678702.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:17:25.895209  371315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:17:25.905009  371315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:25.909147  371315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:25.909212  371315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:25.947660  371315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
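The three test -L / ln -fs commands above install each CA into the node's trust store under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0), which is how OpenSSL locates a CA at verification time. One iteration of the same pattern, with the hash computed exactly as the log computes it:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
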
	I1123 10:17:25.958547  371315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:17:25.963329  371315 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 10:17:25.963400  371315 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-772252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:17:25.963515  371315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:17:25.963592  371315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:17:25.994552  371315 cri.go:89] found id: ""
	I1123 10:17:25.994632  371315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:17:26.004720  371315 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:17:26.014394  371315 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 10:17:26.014465  371315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:17:26.023894  371315 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 10:17:26.023927  371315 kubeadm.go:158] found existing configuration files:
	
	I1123 10:17:26.023984  371315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1123 10:17:26.032407  371315 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 10:17:26.032468  371315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 10:17:26.041623  371315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1123 10:17:26.054201  371315 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 10:17:26.054261  371315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:17:26.066701  371315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1123 10:17:26.079955  371315 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 10:17:26.080191  371315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:17:26.093784  371315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1123 10:17:26.105549  371315 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 10:17:26.105617  371315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 10:17:26.115532  371315 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 10:17:26.160623  371315 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 10:17:26.160969  371315 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 10:17:26.186117  371315 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 10:17:26.186236  371315 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 10:17:26.186285  371315 kubeadm.go:319] OS: Linux
	I1123 10:17:26.186354  371315 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 10:17:26.186447  371315 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 10:17:26.186539  371315 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 10:17:26.186616  371315 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 10:17:26.186682  371315 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 10:17:26.186746  371315 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 10:17:26.186824  371315 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 10:17:26.186884  371315 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 10:17:26.263125  371315 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 10:17:26.263295  371315 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 10:17:26.263483  371315 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 10:17:26.272376  371315 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
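kubeadm init above is launched with a long --ignore-preflight-errors list because the "node" is a Docker container, where checks such as Swap, SystemVerification and the bridge-nf-call-iptables file content can fail. To rerun only the preflight stage against the same config, a sketch using kubeadm's phase subcommand (flag values copied from the invocation above, trimmed for brevity):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification
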
	I1123 10:17:25.897306  373797 cli_runner.go:164] Run: docker network inspect embed-certs-412306 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:17:25.917131  373797 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1123 10:17:25.921503  373797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:17:25.932797  373797 kubeadm.go:884] updating cluster {Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:17:25.932962  373797 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:17:25.933022  373797 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:17:25.971485  373797 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:17:25.971507  373797 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:17:25.971565  373797 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:17:25.998401  373797 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:17:25.998430  373797 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:17:25.998439  373797 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1123 10:17:25.998565  373797 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-412306 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:17:25.998651  373797 ssh_runner.go:195] Run: crio config
	I1123 10:17:26.054182  373797 cni.go:84] Creating CNI manager for ""
	I1123 10:17:26.054212  373797 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:17:26.054230  373797 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:17:26.054261  373797 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-412306 NodeName:embed-certs-412306 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:17:26.054449  373797 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-412306"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:17:26.054528  373797 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:17:26.069247  373797 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:17:26.069315  373797 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:17:26.084536  373797 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1123 10:17:26.105237  373797 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:17:26.122042  373797 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1123 10:17:26.135463  373797 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:17:26.139894  373797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
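	[editor's note] For reference, a minimal Go sketch of what the hosts-file rewrite above does: drop any stale line for control-plane.minikube.internal and append the current IP mapping. The output path /tmp/hosts.new is a stand-in so the sketch can run unprivileged; the real command rewrites /etc/hosts in place via sudo.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const host = "control-plane.minikube.internal"
		const entry = "192.168.94.2\t" + host

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			// Drop any existing mapping for the host, like the `grep -v` above.
			if strings.HasSuffix(line, "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		out := strings.Join(kept, "\n")
		if out != "" && !strings.HasSuffix(out, "\n") {
			out += "\n"
		}
		out += entry + "\n"
		if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0644); err != nil {
			panic(err)
		}
		fmt.Println("wrote updated hosts file to /tmp/hosts.new")
	}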
	I1123 10:17:26.152470  373797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:26.259400  373797 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:26.293349  373797 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306 for IP: 192.168.94.2
	I1123 10:17:26.293376  373797 certs.go:195] generating shared ca certs ...
	I1123 10:17:26.293398  373797 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:26.293563  373797 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 10:17:26.293621  373797 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 10:17:26.293631  373797 certs.go:257] generating profile certs ...
	I1123 10:17:26.293719  373797 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/client.key
	I1123 10:17:26.293765  373797 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key.7dd66a37
	I1123 10:17:26.293798  373797 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.key
	I1123 10:17:26.293962  373797 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem (1338 bytes)
	W1123 10:17:26.294032  373797 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870_empty.pem, impossibly tiny 0 bytes
	I1123 10:17:26.294043  373797 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 10:17:26.294080  373797 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:17:26.294150  373797 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:17:26.294182  373797 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 10:17:26.294239  373797 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:17:26.295078  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:17:26.319354  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:17:26.346624  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:17:26.375357  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 10:17:26.408580  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 10:17:26.438245  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 10:17:26.463452  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:17:26.491192  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:17:26.535358  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem --> /usr/share/ca-certificates/67870.pem (1338 bytes)
	I1123 10:17:26.564257  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /usr/share/ca-certificates/678702.pem (1708 bytes)
	I1123 10:17:26.589245  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:17:26.615973  373797 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:17:26.634980  373797 ssh_runner.go:195] Run: openssl version
	I1123 10:17:26.643923  373797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67870.pem && ln -fs /usr/share/ca-certificates/67870.pem /etc/ssl/certs/67870.pem"
	I1123 10:17:26.658008  373797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67870.pem
	I1123 10:17:26.663894  373797 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:28 /usr/share/ca-certificates/67870.pem
	I1123 10:17:26.663963  373797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67870.pem
	I1123 10:17:26.725019  373797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67870.pem /etc/ssl/certs/51391683.0"
	I1123 10:17:26.741335  373797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678702.pem && ln -fs /usr/share/ca-certificates/678702.pem /etc/ssl/certs/678702.pem"
	I1123 10:17:26.754306  373797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678702.pem
	I1123 10:17:26.760205  373797 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:28 /usr/share/ca-certificates/678702.pem
	I1123 10:17:26.760289  373797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678702.pem
	I1123 10:17:26.817066  373797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678702.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:17:26.828242  373797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:17:26.840286  373797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:26.845608  373797 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:26.845667  373797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:26.907823  373797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:17:26.920712  373797 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:17:26.926906  373797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:17:26.993735  373797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:17:27.067117  373797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:17:27.144625  373797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:17:27.218572  373797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:17:27.280794  373797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
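	[editor's note] Each of the openssl probes above uses `-checkend 86400`, i.e. "will this certificate still be valid 24 hours from now?". A minimal Go sketch of the same check; the path is taken from the log above, and any PEM-encoded certificate works.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of `openssl x509 -checkend 86400`: fail if the cert
		// is no longer valid 24 hours from now.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24 hours")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24 hours")
	}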
	I1123 10:17:27.347949  373797 kubeadm.go:401] StartCluster: {Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:17:27.348439  373797 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:17:27.348547  373797 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:17:27.395884  373797 cri.go:89] found id: "0632950c74da2eb4978b2f96c82351b0c7fc311f03cdaaff9f60fb24bdaa3804"
	I1123 10:17:27.395917  373797 cri.go:89] found id: "b7c384560289e99b732f0e7897327765130672b6e7346a6340bd2a1e35372ea5"
	I1123 10:17:27.395924  373797 cri.go:89] found id: "3ce42ea391320b5ee86e145a2f64c2015bb9f8236b5dfa38af9a25f2cb484824"
	I1123 10:17:27.395929  373797 cri.go:89] found id: "e3ffbd81d631a2d4ada1879aabcbc74e4a0a1df338a0ca8e07cf4c3ff88f9430"
	I1123 10:17:27.395933  373797 cri.go:89] found id: ""
	I1123 10:17:27.395979  373797 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:17:27.419845  373797 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:17:27Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:17:27.419963  373797 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:17:27.439378  373797 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:17:27.439398  373797 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:17:27.439448  373797 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:17:27.451084  373797 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:17:27.451946  373797 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-412306" does not appear in /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:17:27.452494  373797 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-64343/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-412306" cluster setting kubeconfig missing "embed-certs-412306" context setting]
	I1123 10:17:27.453585  373797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:27.455654  373797 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:17:27.467125  373797 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1123 10:17:27.467282  373797 kubeadm.go:602] duration metric: took 27.876451ms to restartPrimaryControlPlane
	I1123 10:17:27.467296  373797 kubeadm.go:403] duration metric: took 119.360738ms to StartCluster
	I1123 10:17:27.467315  373797 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:27.467483  373797 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:17:27.469463  373797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:27.470000  373797 config.go:182] Loaded profile config "embed-certs-412306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:27.470115  373797 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:17:27.470204  373797 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-412306"
	I1123 10:17:27.470221  373797 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-412306"
	W1123 10:17:27.470228  373797 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:17:27.470273  373797 host.go:66] Checking if "embed-certs-412306" exists ...
	I1123 10:17:27.470801  373797 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:17:27.471054  373797 addons.go:70] Setting dashboard=true in profile "embed-certs-412306"
	I1123 10:17:27.471072  373797 addons.go:239] Setting addon dashboard=true in "embed-certs-412306"
	W1123 10:17:27.471080  373797 addons.go:248] addon dashboard should already be in state true
	I1123 10:17:27.471255  373797 host.go:66] Checking if "embed-certs-412306" exists ...
	I1123 10:17:27.471727  373797 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:17:27.471889  373797 addons.go:70] Setting default-storageclass=true in profile "embed-certs-412306"
	I1123 10:17:27.471907  373797 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-412306"
	I1123 10:17:27.472219  373797 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:17:27.472422  373797 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:17:27.474200  373797 out.go:179] * Verifying Kubernetes components...
	I1123 10:17:27.475292  373797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:27.502438  373797 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:17:27.503728  373797 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:27.503754  373797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:17:27.503822  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:27.506369  373797 addons.go:239] Setting addon default-storageclass=true in "embed-certs-412306"
	W1123 10:17:27.506905  373797 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:17:27.506973  373797 host.go:66] Checking if "embed-certs-412306" exists ...
	I1123 10:17:27.507482  373797 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:17:27.520746  373797 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 10:17:27.522141  373797 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:17:23.293716  371192 addons.go:530] duration metric: took 2.091867033s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 10:17:23.784999  371192 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:17:23.789545  371192 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:17:23.789569  371192 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:17:24.285244  371192 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:17:24.290382  371192 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 10:17:24.291908  371192 api_server.go:141] control plane version: v1.34.1
	I1123 10:17:24.291943  371192 api_server.go:131] duration metric: took 1.007520894s to wait for apiserver health ...
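	[editor's note] The healthz wait above polls the apiserver until /healthz stops returning 500 and answers 200 "ok". A self-contained Go sketch of that polling loop follows; TLS verification is skipped here purely so the example runs without the cluster CA, whereas the real client is configured with the profile's credentials.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.85.2:8443/healthz" // endpoint from the log above
		for {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver is healthy")
					return
				}
				fmt.Println("healthz returned", resp.StatusCode)
			} else {
				fmt.Println("healthz request failed:", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}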
	I1123 10:17:24.291958  371192 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:17:24.295996  371192 system_pods.go:59] 8 kube-system pods found
	I1123 10:17:24.296039  371192 system_pods.go:61] "coredns-66bc5c9577-krmwt" [39101b53-5254-41f3-bac9-c711e67dc551] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:24.296051  371192 system_pods.go:61] "etcd-no-preload-541522" [80258726-c8e2-4b27-962c-ee45e6948d2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:17:24.296061  371192 system_pods.go:61] "kindnet-9vppw" [3b98e7a4-34e9-46af-97a1-764b6ed92ec6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 10:17:24.296079  371192 system_pods.go:61] "kube-apiserver-no-preload-541522" [54bb8554-b2d7-4fc2-9d26-507e36b6d56f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:17:24.296121  371192 system_pods.go:61] "kube-controller-manager-no-preload-541522" [b6d91917-0381-4558-9f2a-769f81cf9d86] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:17:24.296136  371192 system_pods.go:61] "kube-proxy-sllct" [c5b13417-4bca-4ec1-8e60-cf5016aa28ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 10:17:24.296144  371192 system_pods.go:61] "kube-scheduler-no-preload-541522" [31a3c55f-ac27-4800-af06-822af5bc6836] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:17:24.296159  371192 system_pods.go:61] "storage-provisioner" [40eb99ea-9515-431c-888b-81826014f8a6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:17:24.296167  371192 system_pods.go:74] duration metric: took 4.202627ms to wait for pod list to return data ...
	I1123 10:17:24.296176  371192 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:17:24.298844  371192 default_sa.go:45] found service account: "default"
	I1123 10:17:24.298867  371192 default_sa.go:55] duration metric: took 2.684141ms for default service account to be created ...
	I1123 10:17:24.298878  371192 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:17:24.301765  371192 system_pods.go:86] 8 kube-system pods found
	I1123 10:17:24.301800  371192 system_pods.go:89] "coredns-66bc5c9577-krmwt" [39101b53-5254-41f3-bac9-c711e67dc551] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:24.301814  371192 system_pods.go:89] "etcd-no-preload-541522" [80258726-c8e2-4b27-962c-ee45e6948d2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:17:24.301825  371192 system_pods.go:89] "kindnet-9vppw" [3b98e7a4-34e9-46af-97a1-764b6ed92ec6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 10:17:24.301839  371192 system_pods.go:89] "kube-apiserver-no-preload-541522" [54bb8554-b2d7-4fc2-9d26-507e36b6d56f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:17:24.301852  371192 system_pods.go:89] "kube-controller-manager-no-preload-541522" [b6d91917-0381-4558-9f2a-769f81cf9d86] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:17:24.301865  371192 system_pods.go:89] "kube-proxy-sllct" [c5b13417-4bca-4ec1-8e60-cf5016aa28ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 10:17:24.301877  371192 system_pods.go:89] "kube-scheduler-no-preload-541522" [31a3c55f-ac27-4800-af06-822af5bc6836] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:17:24.301893  371192 system_pods.go:89] "storage-provisioner" [40eb99ea-9515-431c-888b-81826014f8a6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:17:24.301907  371192 system_pods.go:126] duration metric: took 3.021865ms to wait for k8s-apps to be running ...
	I1123 10:17:24.301921  371192 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:17:24.301973  371192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:17:24.318330  371192 system_svc.go:56] duration metric: took 16.399439ms WaitForService to wait for kubelet
	I1123 10:17:24.318363  371192 kubeadm.go:587] duration metric: took 3.1165169s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:17:24.318385  371192 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:17:24.322994  371192 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:17:24.323037  371192 node_conditions.go:123] node cpu capacity is 8
	I1123 10:17:24.323054  371192 node_conditions.go:105] duration metric: took 4.663725ms to run NodePressure ...
	I1123 10:17:24.323070  371192 start.go:242] waiting for startup goroutines ...
	I1123 10:17:24.323078  371192 start.go:247] waiting for cluster config update ...
	I1123 10:17:24.323103  371192 start.go:256] writing updated cluster config ...
	I1123 10:17:24.323457  371192 ssh_runner.go:195] Run: rm -f paused
	I1123 10:17:24.329879  371192 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:17:24.335776  371192 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-krmwt" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 10:17:26.342596  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	I1123 10:17:26.275186  371315 out.go:252]   - Generating certificates and keys ...
	I1123 10:17:26.275352  371315 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 10:17:26.275478  371315 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 10:17:27.203820  371315 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:17:27.842679  371315 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	W1123 10:17:25.414040  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:27.423694  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	I1123 10:17:27.523106  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:17:27.523125  373797 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:17:27.523187  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:27.544410  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:27.546884  373797 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:27.546911  373797 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:17:27.547054  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:27.554028  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:27.584494  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:27.729896  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:17:27.729923  373797 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:17:27.730389  373797 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:27.748713  373797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:27.762305  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:17:27.762345  373797 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:17:27.773616  373797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:27.783643  373797 node_ready.go:35] waiting up to 6m0s for node "embed-certs-412306" to be "Ready" ...
	I1123 10:17:27.816165  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:17:27.816196  373797 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:17:27.853683  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:17:27.853715  373797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:17:27.895194  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:17:27.895222  373797 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:17:27.929349  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:17:27.929380  373797 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:17:27.952056  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:17:27.952129  373797 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:17:27.972228  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:17:27.972259  373797 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:17:27.995106  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:17:27.995291  373797 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:17:28.022880  373797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:17:30.169450  373797 node_ready.go:49] node "embed-certs-412306" is "Ready"
	I1123 10:17:30.169488  373797 node_ready.go:38] duration metric: took 2.385791286s for node "embed-certs-412306" to be "Ready" ...
	I1123 10:17:30.169508  373797 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:17:30.169570  373797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:17:30.263935  373797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.515175318s)
	I1123 10:17:30.844237  373797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.070570716s)
	I1123 10:17:30.844367  373797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.821379534s)
	I1123 10:17:30.844403  373797 api_server.go:72] duration metric: took 3.371939039s to wait for apiserver process to appear ...
	I1123 10:17:30.844420  373797 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:17:30.844441  373797 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 10:17:30.846035  373797 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-412306 addons enable metrics-server
	
	I1123 10:17:30.847355  373797 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1123 10:17:28.139930  371315 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:17:28.712709  371315 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:17:28.816265  371315 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:17:28.816782  371315 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-772252 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 10:17:29.335727  371315 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:17:29.335950  371315 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-772252 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 10:17:29.643887  371315 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:17:30.187228  371315 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:17:30.521995  371315 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:17:30.522113  371315 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:17:30.784711  371315 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:17:31.090260  371315 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 10:17:31.313967  371315 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:17:31.369836  371315 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:17:31.747785  371315 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:17:31.748584  371315 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:17:31.753537  371315 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1123 10:17:28.348145  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:30.843172  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	I1123 10:17:31.754796  371315 out.go:252]   - Booting up control plane ...
	I1123 10:17:31.754943  371315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:17:31.755055  371315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:17:31.755934  371315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:17:31.779002  371315 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:17:31.779431  371315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 10:17:31.788946  371315 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 10:17:31.789330  371315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:17:31.789392  371315 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:17:31.939409  371315 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 10:17:31.939585  371315 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1123 10:17:29.940244  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:32.465244  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	I1123 10:17:30.848716  373797 addons.go:530] duration metric: took 3.378601039s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1123 10:17:30.850138  373797 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:17:30.850165  373797 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:17:31.345352  373797 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 10:17:31.353137  373797 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:17:31.353176  373797 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:17:31.844492  373797 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 10:17:31.850813  373797 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 10:17:31.852077  373797 api_server.go:141] control plane version: v1.34.1
	I1123 10:17:31.852127  373797 api_server.go:131] duration metric: took 1.007698573s to wait for apiserver health ...
	I1123 10:17:31.852139  373797 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:17:31.855854  373797 system_pods.go:59] 8 kube-system pods found
	I1123 10:17:31.855888  373797 system_pods.go:61] "coredns-66bc5c9577-fxl7j" [4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:31.855899  373797 system_pods.go:61] "etcd-embed-certs-412306" [f8befdc6-c172-4569-9ca7-2d3ba827dbb5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:17:31.855905  373797 system_pods.go:61] "kindnet-sm2h2" [1af4c3f2-8377-4a64-9499-502b9841a81d] Running
	I1123 10:17:31.855914  373797 system_pods.go:61] "kube-apiserver-embed-certs-412306" [0c456387-52ea-4271-af83-9b87f7ddc832] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:17:31.855923  373797 system_pods.go:61] "kube-controller-manager-embed-certs-412306" [cebfc94c-5d85-40f3-8099-b50676f43ef5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:17:31.855929  373797 system_pods.go:61] "kube-proxy-2vnjq" [10c4fa48-37ca-4164-83ef-7ab034f844a9] Running
	I1123 10:17:31.855939  373797 system_pods.go:61] "kube-scheduler-embed-certs-412306" [9384ec5c-f592-4f4d-84ba-313b7eabf50c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:17:31.855944  373797 system_pods.go:61] "storage-provisioner" [199ec01f-2a64-4666-af02-cd1ad7ae4cc2] Running
	I1123 10:17:31.855952  373797 system_pods.go:74] duration metric: took 3.805802ms to wait for pod list to return data ...
	I1123 10:17:31.855961  373797 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:17:31.858650  373797 default_sa.go:45] found service account: "default"
	I1123 10:17:31.858679  373797 default_sa.go:55] duration metric: took 2.711408ms for default service account to be created ...
	I1123 10:17:31.858690  373797 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:17:31.862049  373797 system_pods.go:86] 8 kube-system pods found
	I1123 10:17:31.862079  373797 system_pods.go:89] "coredns-66bc5c9577-fxl7j" [4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:31.862105  373797 system_pods.go:89] "etcd-embed-certs-412306" [f8befdc6-c172-4569-9ca7-2d3ba827dbb5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:17:31.862124  373797 system_pods.go:89] "kindnet-sm2h2" [1af4c3f2-8377-4a64-9499-502b9841a81d] Running
	I1123 10:17:31.862134  373797 system_pods.go:89] "kube-apiserver-embed-certs-412306" [0c456387-52ea-4271-af83-9b87f7ddc832] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:17:31.862144  373797 system_pods.go:89] "kube-controller-manager-embed-certs-412306" [cebfc94c-5d85-40f3-8099-b50676f43ef5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:17:31.862150  373797 system_pods.go:89] "kube-proxy-2vnjq" [10c4fa48-37ca-4164-83ef-7ab034f844a9] Running
	I1123 10:17:31.862163  373797 system_pods.go:89] "kube-scheduler-embed-certs-412306" [9384ec5c-f592-4f4d-84ba-313b7eabf50c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:17:31.862169  373797 system_pods.go:89] "storage-provisioner" [199ec01f-2a64-4666-af02-cd1ad7ae4cc2] Running
	I1123 10:17:31.862179  373797 system_pods.go:126] duration metric: took 3.483683ms to wait for k8s-apps to be running ...
	I1123 10:17:31.862188  373797 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:17:31.862236  373797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:17:31.880556  373797 system_svc.go:56] duration metric: took 18.357008ms WaitForService to wait for kubelet
	I1123 10:17:31.880607  373797 kubeadm.go:587] duration metric: took 4.408143491s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:17:31.880631  373797 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:17:31.884219  373797 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:17:31.884253  373797 node_conditions.go:123] node cpu capacity is 8
	I1123 10:17:31.884271  373797 node_conditions.go:105] duration metric: took 3.634037ms to run NodePressure ...
	I1123 10:17:31.884287  373797 start.go:242] waiting for startup goroutines ...
	I1123 10:17:31.884299  373797 start.go:247] waiting for cluster config update ...
	I1123 10:17:31.884319  373797 start.go:256] writing updated cluster config ...
	I1123 10:17:31.884624  373797 ssh_runner.go:195] Run: rm -f paused
	I1123 10:17:31.889946  373797 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:17:31.894375  373797 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fxl7j" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 10:17:33.901572  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:33.523784  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:35.846995  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	I1123 10:17:32.941081  371315 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001868854s
	I1123 10:17:32.945152  371315 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 10:17:32.945305  371315 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I1123 10:17:32.945433  371315 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 10:17:32.945515  371315 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 10:17:35.861865  371315 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.916644987s
	I1123 10:17:36.776622  371315 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.831435695s
	I1123 10:17:38.447477  371315 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502246404s
	I1123 10:17:38.458614  371315 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:17:38.467767  371315 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:17:38.476049  371315 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:17:38.476376  371315 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-772252 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:17:38.484454  371315 kubeadm.go:319] [bootstrap-token] Using token: 7c739u.zwt0bal8xrfj12xj
	W1123 10:17:34.916285  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:37.413216  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:36.400976  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:38.912096  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	I1123 10:17:38.485658  371315 out.go:252]   - Configuring RBAC rules ...
	I1123 10:17:38.485833  371315 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:17:38.489646  371315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:17:38.494425  371315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:17:38.496889  371315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:17:38.499031  371315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:17:38.501264  371315 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:17:38.853661  371315 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:17:39.273659  371315 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:17:39.853812  371315 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:17:39.855808  371315 kubeadm.go:319] 
	I1123 10:17:39.855908  371315 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:17:39.855921  371315 kubeadm.go:319] 
	I1123 10:17:39.856050  371315 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:17:39.856060  371315 kubeadm.go:319] 
	I1123 10:17:39.856130  371315 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:17:39.856198  371315 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:17:39.856261  371315 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:17:39.856271  371315 kubeadm.go:319] 
	I1123 10:17:39.856335  371315 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:17:39.856340  371315 kubeadm.go:319] 
	I1123 10:17:39.856394  371315 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:17:39.856399  371315 kubeadm.go:319] 
	I1123 10:17:39.856459  371315 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:17:39.856552  371315 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:17:39.856635  371315 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:17:39.856644  371315 kubeadm.go:319] 
	I1123 10:17:39.856747  371315 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:17:39.856841  371315 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:17:39.856850  371315 kubeadm.go:319] 
	I1123 10:17:39.856946  371315 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 7c739u.zwt0bal8xrfj12xj \
	I1123 10:17:39.857068  371315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 \
	I1123 10:17:39.857106  371315 kubeadm.go:319] 	--control-plane 
	I1123 10:17:39.857112  371315 kubeadm.go:319] 
	I1123 10:17:39.857223  371315 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:17:39.857231  371315 kubeadm.go:319] 
	I1123 10:17:39.857360  371315 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 7c739u.zwt0bal8xrfj12xj \
	I1123 10:17:39.857522  371315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 
	I1123 10:17:39.861171  371315 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 10:17:39.861361  371315 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 10:17:39.861384  371315 cni.go:84] Creating CNI manager for ""
	I1123 10:17:39.861392  371315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:17:39.863656  371315 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1123 10:17:38.341179  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:40.341963  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	I1123 10:17:39.864757  371315 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:17:39.869984  371315 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:17:39.870008  371315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:17:39.886324  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:17:40.362280  371315 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:17:40.362400  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:40.362400  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-772252 minikube.k8s.io/updated_at=2025_11_23T10_17_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=default-k8s-diff-port-772252 minikube.k8s.io/primary=true
	I1123 10:17:40.379214  371315 ops.go:34] apiserver oom_adj: -16
	I1123 10:17:40.464921  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:40.965405  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:41.465003  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:41.965821  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:42.464950  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1123 10:17:39.414230  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:41.914196  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:41.400282  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:43.899909  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	I1123 10:17:42.965639  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:43.465528  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:43.965079  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:44.464998  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:44.965763  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:45.037128  371315 kubeadm.go:1114] duration metric: took 4.67480031s to wait for elevateKubeSystemPrivileges
	I1123 10:17:45.037171  371315 kubeadm.go:403] duration metric: took 19.073779602s to StartCluster
	I1123 10:17:45.037193  371315 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:45.037267  371315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:17:45.039120  371315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:45.039419  371315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:17:45.039444  371315 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:17:45.039520  371315 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:17:45.039628  371315 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-772252"
	I1123 10:17:45.039656  371315 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-772252"
	I1123 10:17:45.039686  371315 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-772252"
	I1123 10:17:45.039661  371315 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-772252"
	I1123 10:17:45.039720  371315 config.go:182] Loaded profile config "default-k8s-diff-port-772252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:45.039784  371315 host.go:66] Checking if "default-k8s-diff-port-772252" exists ...
	I1123 10:17:45.040159  371315 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:17:45.040405  371315 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:17:45.041405  371315 out.go:179] * Verifying Kubernetes components...
	I1123 10:17:45.042675  371315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:45.064542  371315 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-772252"
	I1123 10:17:45.064587  371315 host.go:66] Checking if "default-k8s-diff-port-772252" exists ...
	I1123 10:17:45.064919  371315 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:17:45.065873  371315 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:17:45.067076  371315 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:45.067111  371315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:17:45.067169  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:45.085477  371315 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:45.085507  371315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:17:45.086250  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:45.092224  371315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:17:45.114171  371315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:17:45.126365  371315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:17:45.189744  371315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:45.218033  371315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:45.235955  371315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:45.315901  371315 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1123 10:17:45.317142  371315 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-772252" to be "Ready" ...
	I1123 10:17:45.535405  371315 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1123 10:17:42.843988  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:45.342896  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	I1123 10:17:45.536493  371315 addons.go:530] duration metric: took 496.970486ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 10:17:45.820948  371315 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-772252" context rescaled to 1 replicas
	W1123 10:17:47.319425  371315 node_ready.go:57] node "default-k8s-diff-port-772252" has "Ready":"False" status (will retry)
	W1123 10:17:43.914556  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:46.414198  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:45.900010  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:47.900260  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:47.841815  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:50.341880  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:49.319741  371315 node_ready.go:57] node "default-k8s-diff-port-772252" has "Ready":"False" status (will retry)
	W1123 10:17:51.320336  371315 node_ready.go:57] node "default-k8s-diff-port-772252" has "Ready":"False" status (will retry)
	W1123 10:17:48.913341  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:51.412869  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:53.413536  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:50.400011  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:52.900077  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	I1123 10:17:53.913334  366730 pod_ready.go:94] pod "coredns-5dd5756b68-fsbfv" is "Ready"
	I1123 10:17:53.913363  366730 pod_ready.go:86] duration metric: took 39.505598501s for pod "coredns-5dd5756b68-fsbfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:53.916455  366730 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:53.920979  366730 pod_ready.go:94] pod "etcd-old-k8s-version-990757" is "Ready"
	I1123 10:17:53.921004  366730 pod_ready.go:86] duration metric: took 4.524758ms for pod "etcd-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:53.923876  366730 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:53.928363  366730 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-990757" is "Ready"
	I1123 10:17:53.928389  366730 pod_ready.go:86] duration metric: took 4.49134ms for pod "kube-apiserver-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:53.931268  366730 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:54.111689  366730 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-990757" is "Ready"
	I1123 10:17:54.111728  366730 pod_ready.go:86] duration metric: took 180.43869ms for pod "kube-controller-manager-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:54.312490  366730 pod_ready.go:83] waiting for pod "kube-proxy-99g4b" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:54.711645  366730 pod_ready.go:94] pod "kube-proxy-99g4b" is "Ready"
	I1123 10:17:54.711677  366730 pod_ready.go:86] duration metric: took 399.161367ms for pod "kube-proxy-99g4b" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:54.912461  366730 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:55.311759  366730 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-990757" is "Ready"
	I1123 10:17:55.311784  366730 pod_ready.go:86] duration metric: took 399.295747ms for pod "kube-scheduler-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:55.311813  366730 pod_ready.go:40] duration metric: took 40.908845551s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:17:55.356075  366730 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1123 10:17:55.357834  366730 out.go:203] 
	W1123 10:17:55.359077  366730 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 10:17:55.360393  366730 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 10:17:55.361705  366730 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-990757" cluster and "default" namespace by default
	W1123 10:17:52.841432  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:55.341775  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:57.341870  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:53.320896  371315 node_ready.go:57] node "default-k8s-diff-port-772252" has "Ready":"False" status (will retry)
	W1123 10:17:55.820856  371315 node_ready.go:57] node "default-k8s-diff-port-772252" has "Ready":"False" status (will retry)
	I1123 10:17:56.320034  371315 node_ready.go:49] node "default-k8s-diff-port-772252" is "Ready"
	I1123 10:17:56.320062  371315 node_ready.go:38] duration metric: took 11.002894749s for node "default-k8s-diff-port-772252" to be "Ready" ...
	I1123 10:17:56.320077  371315 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:17:56.320168  371315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:17:56.333026  371315 api_server.go:72] duration metric: took 11.293527033s to wait for apiserver process to appear ...
	I1123 10:17:56.333046  371315 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:17:56.333064  371315 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1123 10:17:56.337320  371315 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1123 10:17:56.338383  371315 api_server.go:141] control plane version: v1.34.1
	I1123 10:17:56.338411  371315 api_server.go:131] duration metric: took 5.357543ms to wait for apiserver health ...
	I1123 10:17:56.338423  371315 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:17:56.342472  371315 system_pods.go:59] 8 kube-system pods found
	I1123 10:17:56.342509  371315 system_pods.go:61] "coredns-66bc5c9577-c5c4c" [b393f50c-f83f-45b4-8c27-56971c3279c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:56.342517  371315 system_pods.go:61] "etcd-default-k8s-diff-port-772252" [de179811-197e-4e4b-9933-f051ca479011] Running
	I1123 10:17:56.342525  371315 system_pods.go:61] "kindnet-4dnjf" [3258335f-0700-4a89-8857-c10cfc091182] Running
	I1123 10:17:56.342531  371315 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-772252" [080999dc-1510-4086-aa20-f7975eb1cb69] Running
	I1123 10:17:56.342538  371315 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-772252" [215dd3a6-702c-4aaf-9299-6d5de9eb21b5] Running
	I1123 10:17:56.342542  371315 system_pods.go:61] "kube-proxy-xfghg" [5cf715f4-c1ca-4938-a213-7095cb2c7823] Running
	I1123 10:17:56.342549  371315 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-772252" [c020136f-1728-4423-b34e-932682df1f89] Running
	I1123 10:17:56.342554  371315 system_pods.go:61] "storage-provisioner" [9d727e76-94f8-4344-820c-f2d4e83f5d87] Running
	I1123 10:17:56.342565  371315 system_pods.go:74] duration metric: took 4.133412ms to wait for pod list to return data ...
	I1123 10:17:56.342577  371315 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:17:56.344836  371315 default_sa.go:45] found service account: "default"
	I1123 10:17:56.344858  371315 default_sa.go:55] duration metric: took 2.273737ms for default service account to be created ...
	I1123 10:17:56.344868  371315 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:17:56.347696  371315 system_pods.go:86] 8 kube-system pods found
	I1123 10:17:56.347728  371315 system_pods.go:89] "coredns-66bc5c9577-c5c4c" [b393f50c-f83f-45b4-8c27-56971c3279c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:56.347736  371315 system_pods.go:89] "etcd-default-k8s-diff-port-772252" [de179811-197e-4e4b-9933-f051ca479011] Running
	I1123 10:17:56.347744  371315 system_pods.go:89] "kindnet-4dnjf" [3258335f-0700-4a89-8857-c10cfc091182] Running
	I1123 10:17:56.347754  371315 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772252" [080999dc-1510-4086-aa20-f7975eb1cb69] Running
	I1123 10:17:56.347760  371315 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772252" [215dd3a6-702c-4aaf-9299-6d5de9eb21b5] Running
	I1123 10:17:56.347768  371315 system_pods.go:89] "kube-proxy-xfghg" [5cf715f4-c1ca-4938-a213-7095cb2c7823] Running
	I1123 10:17:56.347773  371315 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772252" [c020136f-1728-4423-b34e-932682df1f89] Running
	I1123 10:17:56.347778  371315 system_pods.go:89] "storage-provisioner" [9d727e76-94f8-4344-820c-f2d4e83f5d87] Running
	I1123 10:17:56.347800  371315 retry.go:31] will retry after 302.24178ms: missing components: kube-dns
	I1123 10:17:56.653773  371315 system_pods.go:86] 8 kube-system pods found
	I1123 10:17:56.653806  371315 system_pods.go:89] "coredns-66bc5c9577-c5c4c" [b393f50c-f83f-45b4-8c27-56971c3279c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:56.653815  371315 system_pods.go:89] "etcd-default-k8s-diff-port-772252" [de179811-197e-4e4b-9933-f051ca479011] Running
	I1123 10:17:56.653820  371315 system_pods.go:89] "kindnet-4dnjf" [3258335f-0700-4a89-8857-c10cfc091182] Running
	I1123 10:17:56.653830  371315 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772252" [080999dc-1510-4086-aa20-f7975eb1cb69] Running
	I1123 10:17:56.653835  371315 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772252" [215dd3a6-702c-4aaf-9299-6d5de9eb21b5] Running
	I1123 10:17:56.653840  371315 system_pods.go:89] "kube-proxy-xfghg" [5cf715f4-c1ca-4938-a213-7095cb2c7823] Running
	I1123 10:17:56.653846  371315 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772252" [c020136f-1728-4423-b34e-932682df1f89] Running
	I1123 10:17:56.653851  371315 system_pods.go:89] "storage-provisioner" [9d727e76-94f8-4344-820c-f2d4e83f5d87] Running
	I1123 10:17:56.653871  371315 retry.go:31] will retry after 265.267308ms: missing components: kube-dns
	I1123 10:17:56.923296  371315 system_pods.go:86] 8 kube-system pods found
	I1123 10:17:56.923348  371315 system_pods.go:89] "coredns-66bc5c9577-c5c4c" [b393f50c-f83f-45b4-8c27-56971c3279c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:56.923356  371315 system_pods.go:89] "etcd-default-k8s-diff-port-772252" [de179811-197e-4e4b-9933-f051ca479011] Running
	I1123 10:17:56.923382  371315 system_pods.go:89] "kindnet-4dnjf" [3258335f-0700-4a89-8857-c10cfc091182] Running
	I1123 10:17:56.923389  371315 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772252" [080999dc-1510-4086-aa20-f7975eb1cb69] Running
	I1123 10:17:56.923401  371315 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772252" [215dd3a6-702c-4aaf-9299-6d5de9eb21b5] Running
	I1123 10:17:56.923407  371315 system_pods.go:89] "kube-proxy-xfghg" [5cf715f4-c1ca-4938-a213-7095cb2c7823] Running
	I1123 10:17:56.923412  371315 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772252" [c020136f-1728-4423-b34e-932682df1f89] Running
	I1123 10:17:56.923417  371315 system_pods.go:89] "storage-provisioner" [9d727e76-94f8-4344-820c-f2d4e83f5d87] Running
	I1123 10:17:56.923434  371315 retry.go:31] will retry after 380.263968ms: missing components: kube-dns
	I1123 10:17:57.307510  371315 system_pods.go:86] 8 kube-system pods found
	I1123 10:17:57.307546  371315 system_pods.go:89] "coredns-66bc5c9577-c5c4c" [b393f50c-f83f-45b4-8c27-56971c3279c0] Running
	I1123 10:17:57.307554  371315 system_pods.go:89] "etcd-default-k8s-diff-port-772252" [de179811-197e-4e4b-9933-f051ca479011] Running
	I1123 10:17:57.307562  371315 system_pods.go:89] "kindnet-4dnjf" [3258335f-0700-4a89-8857-c10cfc091182] Running
	I1123 10:17:57.307568  371315 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772252" [080999dc-1510-4086-aa20-f7975eb1cb69] Running
	I1123 10:17:57.307572  371315 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772252" [215dd3a6-702c-4aaf-9299-6d5de9eb21b5] Running
	I1123 10:17:57.307577  371315 system_pods.go:89] "kube-proxy-xfghg" [5cf715f4-c1ca-4938-a213-7095cb2c7823] Running
	I1123 10:17:57.307581  371315 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772252" [c020136f-1728-4423-b34e-932682df1f89] Running
	I1123 10:17:57.307586  371315 system_pods.go:89] "storage-provisioner" [9d727e76-94f8-4344-820c-f2d4e83f5d87] Running
	I1123 10:17:57.307596  371315 system_pods.go:126] duration metric: took 962.72072ms to wait for k8s-apps to be running ...
	I1123 10:17:57.307606  371315 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:17:57.307658  371315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:17:57.320972  371315 system_svc.go:56] duration metric: took 13.353924ms WaitForService to wait for kubelet
	I1123 10:17:57.321004  371315 kubeadm.go:587] duration metric: took 12.281511348s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:17:57.321022  371315 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:17:57.323660  371315 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:17:57.323692  371315 node_conditions.go:123] node cpu capacity is 8
	I1123 10:17:57.323712  371315 node_conditions.go:105] duration metric: took 2.684637ms to run NodePressure ...
	I1123 10:17:57.323726  371315 start.go:242] waiting for startup goroutines ...
	I1123 10:17:57.323742  371315 start.go:247] waiting for cluster config update ...
	I1123 10:17:57.323759  371315 start.go:256] writing updated cluster config ...
	I1123 10:17:57.324067  371315 ssh_runner.go:195] Run: rm -f paused
	I1123 10:17:57.328141  371315 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:17:57.331589  371315 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c5c4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.335257  371315 pod_ready.go:94] pod "coredns-66bc5c9577-c5c4c" is "Ready"
	I1123 10:17:57.335285  371315 pod_ready.go:86] duration metric: took 3.674367ms for pod "coredns-66bc5c9577-c5c4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.337137  371315 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.341306  371315 pod_ready.go:94] pod "etcd-default-k8s-diff-port-772252" is "Ready"
	I1123 10:17:57.341329  371315 pod_ready.go:86] duration metric: took 4.173911ms for pod "etcd-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.343139  371315 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.346731  371315 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-772252" is "Ready"
	I1123 10:17:57.346750  371315 pod_ready.go:86] duration metric: took 3.589943ms for pod "kube-apiserver-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.348459  371315 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.732573  371315 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-772252" is "Ready"
	I1123 10:17:57.732607  371315 pod_ready.go:86] duration metric: took 384.128293ms for pod "kube-controller-manager-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.932984  371315 pod_ready.go:83] waiting for pod "kube-proxy-xfghg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:58.331761  371315 pod_ready.go:94] pod "kube-proxy-xfghg" is "Ready"
	I1123 10:17:58.331788  371315 pod_ready.go:86] duration metric: took 398.77791ms for pod "kube-proxy-xfghg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:58.533376  371315 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:58.932675  371315 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-772252" is "Ready"
	I1123 10:17:58.932705  371315 pod_ready.go:86] duration metric: took 399.30371ms for pod "kube-scheduler-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:58.932717  371315 pod_ready.go:40] duration metric: took 1.604548656s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:17:58.976709  371315 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:17:58.978487  371315 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-772252" cluster and "default" namespace by default
	W1123 10:17:55.399817  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:57.899557  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:59.840864  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	I1123 10:18:00.341361  371192 pod_ready.go:94] pod "coredns-66bc5c9577-krmwt" is "Ready"
	I1123 10:18:00.341391  371192 pod_ready.go:86] duration metric: took 36.00558292s for pod "coredns-66bc5c9577-krmwt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:00.344015  371192 pod_ready.go:83] waiting for pod "etcd-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:00.348659  371192 pod_ready.go:94] pod "etcd-no-preload-541522" is "Ready"
	I1123 10:18:00.348689  371192 pod_ready.go:86] duration metric: took 4.650364ms for pod "etcd-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:00.351238  371192 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:00.354817  371192 pod_ready.go:94] pod "kube-apiserver-no-preload-541522" is "Ready"
	I1123 10:18:00.354840  371192 pod_ready.go:86] duration metric: took 3.5776ms for pod "kube-apiserver-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:00.356850  371192 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:00.540127  371192 pod_ready.go:94] pod "kube-controller-manager-no-preload-541522" is "Ready"
	I1123 10:18:00.540160  371192 pod_ready.go:86] duration metric: took 183.289677ms for pod "kube-controller-manager-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:00.740192  371192 pod_ready.go:83] waiting for pod "kube-proxy-sllct" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:01.139411  371192 pod_ready.go:94] pod "kube-proxy-sllct" is "Ready"
	I1123 10:18:01.139439  371192 pod_ready.go:86] duration metric: took 399.218147ms for pod "kube-proxy-sllct" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:01.340436  371192 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:01.740259  371192 pod_ready.go:94] pod "kube-scheduler-no-preload-541522" is "Ready"
	I1123 10:18:01.740295  371192 pod_ready.go:86] duration metric: took 399.829885ms for pod "kube-scheduler-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:01.740307  371192 pod_ready.go:40] duration metric: took 37.410392677s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:18:01.788412  371192 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:18:01.791159  371192 out.go:179] * Done! kubectl is now configured to use "no-preload-541522" cluster and "default" namespace by default
	W1123 10:18:00.399534  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:18:02.400234  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	I1123 10:18:02.899900  373797 pod_ready.go:94] pod "coredns-66bc5c9577-fxl7j" is "Ready"
	I1123 10:18:02.899931  373797 pod_ready.go:86] duration metric: took 31.005531566s for pod "coredns-66bc5c9577-fxl7j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:02.902103  373797 pod_ready.go:83] waiting for pod "etcd-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:02.905655  373797 pod_ready.go:94] pod "etcd-embed-certs-412306" is "Ready"
	I1123 10:18:02.905688  373797 pod_ready.go:86] duration metric: took 3.561728ms for pod "etcd-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:02.907483  373797 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:02.911179  373797 pod_ready.go:94] pod "kube-apiserver-embed-certs-412306" is "Ready"
	I1123 10:18:02.911205  373797 pod_ready.go:86] duration metric: took 3.701799ms for pod "kube-apiserver-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:02.912993  373797 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:03.099021  373797 pod_ready.go:94] pod "kube-controller-manager-embed-certs-412306" is "Ready"
	I1123 10:18:03.099054  373797 pod_ready.go:86] duration metric: took 186.04071ms for pod "kube-controller-manager-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:03.298482  373797 pod_ready.go:83] waiting for pod "kube-proxy-2vnjq" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:03.697866  373797 pod_ready.go:94] pod "kube-proxy-2vnjq" is "Ready"
	I1123 10:18:03.697900  373797 pod_ready.go:86] duration metric: took 399.390791ms for pod "kube-proxy-2vnjq" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:03.898175  373797 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:04.298226  373797 pod_ready.go:94] pod "kube-scheduler-embed-certs-412306" is "Ready"
	I1123 10:18:04.298262  373797 pod_ready.go:86] duration metric: took 400.039787ms for pod "kube-scheduler-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:04.298279  373797 pod_ready.go:40] duration metric: took 32.408301003s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:18:04.344316  373797 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:18:04.346173  373797 out.go:179] * Done! kubectl is now configured to use "embed-certs-412306" cluster and "default" namespace by default
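
	[Editor's note, not part of the captured output] The pod_ready.go lines throughout the log above repeatedly poll kube-system pods until their Ready condition is true (with an extra 4m0s budget). The following is a minimal sketch of that readiness-wait pattern using client-go; it is not minikube's actual implementation, and the kubeconfig path, namespace, pod name, and retry interval are illustrative assumptions.

	// Minimal sketch (assumption: client-go is available and a kubeconfig exists).
	// Polls a kube-system pod until its PodReady condition is True, similar in
	// spirit to the pod_ready.go waits logged above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig path for illustration only.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s extra wait in the log
		for time.Now().Before(deadline) {
			pod, err := clientset.CoreV1().Pods("kube-system").Get(
				context.TODO(), "coredns-66bc5c9577-fxl7j", metav1.GetOptions{})
			if err == nil {
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second) // retry interval is an assumption
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}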
	
	
	==> CRI-O <==
	Nov 23 10:17:34 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:34.325300331Z" level=info msg="Created container ffe2f071023537db208786f25a6aea227c1fe39c1b3f10f869486618924f5387: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fm8f6/kubernetes-dashboard" id=f9f20476-91ec-410b-bf0d-9f737f243302 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:34 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:34.326308967Z" level=info msg="Starting container: ffe2f071023537db208786f25a6aea227c1fe39c1b3f10f869486618924f5387" id=c40516c2-adfb-4096-9029-8d4b18bd58e4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:17:34 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:34.328762005Z" level=info msg="Started container" PID=1727 containerID=ffe2f071023537db208786f25a6aea227c1fe39c1b3f10f869486618924f5387 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fm8f6/kubernetes-dashboard id=c40516c2-adfb-4096-9029-8d4b18bd58e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c1b402a615ce15cd50896f7a31664d779f1503cbc4c093744eedd8055d129f91
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.38624378Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c6a5a20c-5a2b-4bb6-86dc-5bb8f466f4e6 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.387156157Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4dea1b1b-4006-4d5c-a603-075272002f0e name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.388257273Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=00dc5dcf-5004-43df-a8b8-fa06a1a3d0da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.388392657Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.392372509Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.392535809Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/364a7bd399a56469353e34c2a3024e985260161a2ec036c466fd751721d832af/merged/etc/passwd: no such file or directory"
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.392565464Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/364a7bd399a56469353e34c2a3024e985260161a2ec036c466fd751721d832af/merged/etc/group: no such file or directory"
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.392826582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.428333836Z" level=info msg="Created container 9ccd16d74353c15e1600527cf40023e30033f332b977b03880686a3913da40af: kube-system/storage-provisioner/storage-provisioner" id=00dc5dcf-5004-43df-a8b8-fa06a1a3d0da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.4289488Z" level=info msg="Starting container: 9ccd16d74353c15e1600527cf40023e30033f332b977b03880686a3913da40af" id=fd937c79-d3a7-437a-894b-36f81ab22368 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.43071366Z" level=info msg="Started container" PID=1750 containerID=9ccd16d74353c15e1600527cf40023e30033f332b977b03880686a3913da40af description=kube-system/storage-provisioner/storage-provisioner id=fd937c79-d3a7-437a-894b-36f81ab22368 name=/runtime.v1.RuntimeService/StartContainer sandboxID=64dba340508095d478402b9079b1d6b5291174a1866c818346f19af2629b3cc2
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.269045599Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=119ac1b8-f6ab-4390-a2b8-ceaa45552537 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.269927942Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a8ed94ab-5b2f-4c4b-b7c6-66a3db7af03c name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.270950253Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn/dashboard-metrics-scraper" id=2543e1ba-ab94-4fc5-b05f-73c3ec5f2127 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.271106802Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.276939758Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.277566388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.313623713Z" level=info msg="Created container 23ccf4ce86c662244f4b739e4ab18cdc793df7a827799056f377d3f50eab0214: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn/dashboard-metrics-scraper" id=2543e1ba-ab94-4fc5-b05f-73c3ec5f2127 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.314141298Z" level=info msg="Starting container: 23ccf4ce86c662244f4b739e4ab18cdc793df7a827799056f377d3f50eab0214" id=7990174e-f25d-4868-bc11-65fbe40c6f57 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.3192972Z" level=info msg="Started container" PID=1765 containerID=23ccf4ce86c662244f4b739e4ab18cdc793df7a827799056f377d3f50eab0214 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn/dashboard-metrics-scraper id=7990174e-f25d-4868-bc11-65fbe40c6f57 name=/runtime.v1.RuntimeService/StartContainer sandboxID=63aae10b094b91f41f18467b9362839b528d5d307d551876eeb50d04b9ed8d09
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.405594899Z" level=info msg="Removing container: ac1800cd9d6bd93eb082a400dd68302dc038514b14aec60a85e0f0add9ad305f" id=ed264d4c-e1e3-40bd-a0f8-567c5eb3db79 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.415664742Z" level=info msg="Removed container ac1800cd9d6bd93eb082a400dd68302dc038514b14aec60a85e0f0add9ad305f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn/dashboard-metrics-scraper" id=ed264d4c-e1e3-40bd-a0f8-567c5eb3db79 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	23ccf4ce86c66       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   63aae10b094b9       dashboard-metrics-scraper-5f989dc9cf-bfhkn       kubernetes-dashboard
	9ccd16d74353c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   64dba34050809       storage-provisioner                              kube-system
	ffe2f07102353       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   35 seconds ago      Running             kubernetes-dashboard        0                   c1b402a615ce1       kubernetes-dashboard-8694d4445c-fm8f6            kubernetes-dashboard
	d3e2f1261d87f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   d48425b59c112       busybox                                          default
	a66dd032f72a2       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           56 seconds ago      Running             coredns                     0                   6af5b2fceaabe       coredns-5dd5756b68-fsbfv                         kube-system
	7d2173a013595       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           56 seconds ago      Running             kube-proxy                  0                   4e20f159863aa       kube-proxy-99g4b                                 kube-system
	cbaeadd56435f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   9d011d0f754b6       kindnet-nz2m9                                    kube-system
	c6bd46fb7d986       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   64dba34050809       storage-provisioner                              kube-system
	556e97942a390       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           59 seconds ago      Running             kube-apiserver              0                   5af189241ebaf       kube-apiserver-old-k8s-version-990757            kube-system
	674b4af1a0427       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           59 seconds ago      Running             kube-controller-manager     0                   b1ead877871ac       kube-controller-manager-old-k8s-version-990757   kube-system
	c9e0d8276aa07       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           59 seconds ago      Running             kube-scheduler              0                   8ed9b1c11741d       kube-scheduler-old-k8s-version-990757            kube-system
	ebac26e4ce8f3       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           59 seconds ago      Running             etcd                        0                   67782f4153cde       etcd-old-k8s-version-990757                      kube-system
	
	
	==> coredns [a66dd032f72a291c4b9137f10802d9fbf947163ac4ec744f05cff426d166d072] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34267 - 10079 "HINFO IN 3708039612012200968.3694113916681524421. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.04671039s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
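
	[Editor's note, not part of the captured output] The CoreDNS log ends with an API connectivity timeout against 10.96.0.1:443, while the start logs earlier show the api_server.go check probing the apiserver's /healthz endpoint and getting "200" / "ok". The following is a minimal sketch of such a healthz probe in Go; the endpoint URL is taken from the log above, and skipping TLS verification is an assumption made only to keep the sketch self-contained (a real probe would load the cluster CA).

	// Minimal sketch (assumptions noted above): GET the apiserver /healthz endpoint
	// and report the status code and body, as the api_server.go lines do.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption: skip cert verification instead of loading the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.103.2:8444/healthz") // endpoint from the log above
		if err != nil {
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
	}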
	
	
	==> describe nodes <==
	Name:               old-k8s-version-990757
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-990757
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=old-k8s-version-990757
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_16_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:16:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-990757
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:18:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:17:43 +0000   Sun, 23 Nov 2025 10:16:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:17:43 +0000   Sun, 23 Nov 2025 10:16:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:17:43 +0000   Sun, 23 Nov 2025 10:16:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:17:43 +0000   Sun, 23 Nov 2025 10:16:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-990757
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                63027792-4520-472e-b216-dd92789c4530
	  Boot ID:                    37682299-5e60-467e-85b2-43c912a4056e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-fsbfv                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-old-k8s-version-990757                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m4s
	  kube-system                 kindnet-nz2m9                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-990757             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-990757    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-99g4b                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-990757             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-bfhkn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-fm8f6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m4s               kubelet          Node old-k8s-version-990757 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s               kubelet          Node old-k8s-version-990757 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s               kubelet          Node old-k8s-version-990757 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s               node-controller  Node old-k8s-version-990757 event: Registered Node old-k8s-version-990757 in Controller
	  Normal  NodeReady                97s                kubelet          Node old-k8s-version-990757 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node old-k8s-version-990757 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node old-k8s-version-990757 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node old-k8s-version-990757 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                node-controller  Node old-k8s-version-990757 event: Registered Node old-k8s-version-990757 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[ +16.383752] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 09:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 10:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[Nov23 10:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 16 63 a6 3b 7c 08 06
	[  +0.000421] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e f8 56 88 48 d7 08 06
	[  +0.082350] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff be 6d 17 98 af e9 08 06
	[  +0.000334] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[ +24.687881] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 3c b3 56 e6 32 08 06
	[  +0.000364] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da b2 25 9e f0 5d 08 06
	[Nov23 10:16] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	[ +42.472302] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 bc be 6d 36 b3 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	
	
	==> etcd [ebac26e4ce8f31e1b8f09e6ec06a5c05e6707bb591cc39abd93e16c3ee829fcc] <==
	{"level":"info","ts":"2025-11-23T10:17:10.82653Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T10:17:10.826561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-23T10:17:10.826683Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-23T10:17:10.826803Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T10:17:10.826841Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T10:17:10.830075Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T10:17:10.830176Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T10:17:10.830239Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T10:17:10.830444Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T10:17:10.830482Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T10:17:11.916364Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-23T10:17:11.916422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-23T10:17:11.916441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-23T10:17:11.916457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-23T10:17:11.916464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-23T10:17:11.916474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-23T10:17:11.916484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-23T10:17:11.917239Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-990757 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T10:17:11.917243Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T10:17:11.917259Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T10:17:11.917532Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T10:17:11.917585Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T10:17:11.918702Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T10:17:11.918714Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-23T10:17:18.379883Z","caller":"traceutil/trace.go:171","msg":"trace[961230357] transaction","detail":"{read_only:false; response_revision:460; number_of_response:1; }","duration":"166.560111ms","start":"2025-11-23T10:17:18.213304Z","end":"2025-11-23T10:17:18.379864Z","steps":["trace[961230357] 'process raft request'  (duration: 126.430146ms)","trace[961230357] 'compare'  (duration: 39.999941ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:18:09 up  3:00,  0 user,  load average: 4.58, 5.00, 2.98
	Linux old-k8s-version-990757 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cbaeadd56435f3be2e882ca71a5e4c2a576610a12fea8a213be3214b68289f60] <==
	I1123 10:17:13.908674       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:17:13.908924       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 10:17:13.910907       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:17:13.910936       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:17:13.910972       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:17:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:17:14.207283       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:17:14.207432       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:17:14.207475       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:17:14.208344       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:17:14.407726       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:17:14.407757       1 metrics.go:72] Registering metrics
	I1123 10:17:14.407821       1 controller.go:711] "Syncing nftables rules"
	I1123 10:17:24.208531       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:17:24.208621       1 main.go:301] handling current node
	I1123 10:17:34.207482       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:17:34.207535       1 main.go:301] handling current node
	I1123 10:17:44.208016       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:17:44.208051       1 main.go:301] handling current node
	I1123 10:17:54.207433       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:17:54.207481       1 main.go:301] handling current node
	I1123 10:18:04.214177       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:18:04.214247       1 main.go:301] handling current node
	
	
	==> kube-apiserver [556e97942a390024b57d00ce6d2dab22e5234986f456ccd01a8426510bf12dc2] <==
	I1123 10:17:12.977851       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1123 10:17:13.059875       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1123 10:17:13.071566       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1123 10:17:13.071671       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1123 10:17:13.076565       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 10:17:13.076723       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1123 10:17:13.076837       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 10:17:13.077492       1 aggregator.go:166] initial CRD sync complete...
	I1123 10:17:13.077582       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 10:17:13.077614       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:17:13.077641       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:17:13.077865       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1123 10:17:13.077867       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 10:17:13.123547       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:17:13.975696       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:17:14.222220       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 10:17:14.256962       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 10:17:14.275372       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:17:14.285329       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:17:14.293713       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 10:17:14.341030       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.245.16"}
	I1123 10:17:14.360372       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.250.173"}
	I1123 10:17:25.734220       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1123 10:17:25.746892       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:17:25.766358       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [674b4af1a0427bfaca38a9f2c3d8e894dc1b8e4c4bdb0b56c34b4ab06cffe9a1] <==
	I1123 10:17:25.782711       1 event.go:307] "Event occurred" object="kubernetes-dashboard" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/kubernetes-dashboard: endpoints \"kubernetes-dashboard\" already exists"
	I1123 10:17:25.782748       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1123 10:17:25.787885       1 shared_informer.go:318] Caches are synced for service account
	I1123 10:17:25.794845       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.894295ms"
	I1123 10:17:25.798752       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="23.524651ms"
	I1123 10:17:25.798971       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.366µs"
	I1123 10:17:25.808007       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.088266ms"
	I1123 10:17:25.808694       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.383µs"
	I1123 10:17:25.809015       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="78.062µs"
	I1123 10:17:25.821926       1 shared_informer.go:318] Caches are synced for stateful set
	I1123 10:17:25.863922       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 10:17:25.871939       1 shared_informer.go:318] Caches are synced for attach detach
	I1123 10:17:25.923020       1 shared_informer.go:318] Caches are synced for persistent volume
	I1123 10:17:26.283284       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 10:17:26.340077       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 10:17:26.340137       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 10:17:30.362547       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.809µs"
	I1123 10:17:31.368665       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="108.643µs"
	I1123 10:17:32.461533       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.571µs"
	I1123 10:17:34.390971       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.180487ms"
	I1123 10:17:34.391082       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="55.22µs"
	I1123 10:17:50.416147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="100.857µs"
	I1123 10:17:53.848372       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.166239ms"
	I1123 10:17:53.848503       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.752µs"
	I1123 10:17:56.083544       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.025µs"
	
	
	==> kube-proxy [7d2173a013595020de9a41e415a6a98ae7dc0077b210812ebda0b0af5473a287] <==
	I1123 10:17:13.771082       1 server_others.go:69] "Using iptables proxy"
	I1123 10:17:13.793895       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1123 10:17:13.835778       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:17:13.841865       1 server_others.go:152] "Using iptables Proxier"
	I1123 10:17:13.841933       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 10:17:13.841943       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 10:17:13.842005       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 10:17:13.842927       1 server.go:846] "Version info" version="v1.28.0"
	I1123 10:17:13.843129       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:17:13.843889       1 config.go:315] "Starting node config controller"
	I1123 10:17:13.843966       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 10:17:13.844511       1 config.go:188] "Starting service config controller"
	I1123 10:17:13.844539       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 10:17:13.844565       1 config.go:97] "Starting endpoint slice config controller"
	I1123 10:17:13.844569       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 10:17:13.944177       1 shared_informer.go:318] Caches are synced for node config
	I1123 10:17:13.945419       1 shared_informer.go:318] Caches are synced for service config
	I1123 10:17:13.945521       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c9e0d8276aa071eee136baabda6e6268adcd34c9a47ea98e77308ea23679b766] <==
	I1123 10:17:11.439821       1 serving.go:348] Generated self-signed cert in-memory
	W1123 10:17:13.010486       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 10:17:13.010546       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 10:17:13.010561       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 10:17:13.010570       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 10:17:13.039713       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1123 10:17:13.039764       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:17:13.041749       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:17:13.041787       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1123 10:17:13.044231       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	W1123 10:17:13.049879       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1123 10:17:13.049944       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1123 10:17:13.044324       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1123 10:17:13.142959       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 10:17:25 old-k8s-version-990757 kubelet[736]: I1123 10:17:25.897393     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwcnl\" (UniqueName: \"kubernetes.io/projected/ef986112-2b84-4018-a524-06c1bd693ed4-kube-api-access-vwcnl\") pod \"kubernetes-dashboard-8694d4445c-fm8f6\" (UID: \"ef986112-2b84-4018-a524-06c1bd693ed4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fm8f6"
	Nov 23 10:17:25 old-k8s-version-990757 kubelet[736]: I1123 10:17:25.897449     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc4rx\" (UniqueName: \"kubernetes.io/projected/ab90c537-1023-4768-8724-1bd443811215-kube-api-access-gc4rx\") pod \"dashboard-metrics-scraper-5f989dc9cf-bfhkn\" (UID: \"ab90c537-1023-4768-8724-1bd443811215\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn"
	Nov 23 10:17:25 old-k8s-version-990757 kubelet[736]: I1123 10:17:25.897470     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ab90c537-1023-4768-8724-1bd443811215-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-bfhkn\" (UID: \"ab90c537-1023-4768-8724-1bd443811215\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn"
	Nov 23 10:17:25 old-k8s-version-990757 kubelet[736]: I1123 10:17:25.897497     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ef986112-2b84-4018-a524-06c1bd693ed4-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-fm8f6\" (UID: \"ef986112-2b84-4018-a524-06c1bd693ed4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fm8f6"
	Nov 23 10:17:30 old-k8s-version-990757 kubelet[736]: I1123 10:17:30.343003     736 scope.go:117] "RemoveContainer" containerID="0637351a00d8d7d37ed69f59533ec14ce1fcf7142851c8a2844018d2fd3dee5b"
	Nov 23 10:17:31 old-k8s-version-990757 kubelet[736]: I1123 10:17:31.348696     736 scope.go:117] "RemoveContainer" containerID="0637351a00d8d7d37ed69f59533ec14ce1fcf7142851c8a2844018d2fd3dee5b"
	Nov 23 10:17:31 old-k8s-version-990757 kubelet[736]: I1123 10:17:31.348998     736 scope.go:117] "RemoveContainer" containerID="ac1800cd9d6bd93eb082a400dd68302dc038514b14aec60a85e0f0add9ad305f"
	Nov 23 10:17:31 old-k8s-version-990757 kubelet[736]: E1123 10:17:31.350795     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-bfhkn_kubernetes-dashboard(ab90c537-1023-4768-8724-1bd443811215)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn" podUID="ab90c537-1023-4768-8724-1bd443811215"
	Nov 23 10:17:32 old-k8s-version-990757 kubelet[736]: I1123 10:17:32.354052     736 scope.go:117] "RemoveContainer" containerID="ac1800cd9d6bd93eb082a400dd68302dc038514b14aec60a85e0f0add9ad305f"
	Nov 23 10:17:32 old-k8s-version-990757 kubelet[736]: E1123 10:17:32.354507     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-bfhkn_kubernetes-dashboard(ab90c537-1023-4768-8724-1bd443811215)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn" podUID="ab90c537-1023-4768-8724-1bd443811215"
	Nov 23 10:17:34 old-k8s-version-990757 kubelet[736]: I1123 10:17:34.379799     736 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fm8f6" podStartSLOduration=1.225950688 podCreationTimestamp="2025-11-23 10:17:25 +0000 UTC" firstStartedPulling="2025-11-23 10:17:26.111594433 +0000 UTC m=+15.948173439" lastFinishedPulling="2025-11-23 10:17:34.265375983 +0000 UTC m=+24.101954990" observedRunningTime="2025-11-23 10:17:34.377994742 +0000 UTC m=+24.214573752" watchObservedRunningTime="2025-11-23 10:17:34.379732239 +0000 UTC m=+24.216311250"
	Nov 23 10:17:36 old-k8s-version-990757 kubelet[736]: I1123 10:17:36.072809     736 scope.go:117] "RemoveContainer" containerID="ac1800cd9d6bd93eb082a400dd68302dc038514b14aec60a85e0f0add9ad305f"
	Nov 23 10:17:36 old-k8s-version-990757 kubelet[736]: E1123 10:17:36.073296     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-bfhkn_kubernetes-dashboard(ab90c537-1023-4768-8724-1bd443811215)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn" podUID="ab90c537-1023-4768-8724-1bd443811215"
	Nov 23 10:17:44 old-k8s-version-990757 kubelet[736]: I1123 10:17:44.385679     736 scope.go:117] "RemoveContainer" containerID="c6bd46fb7d9861dd655a23db64bd18f5e89613a832e4638352e74fcf52951f8f"
	Nov 23 10:17:50 old-k8s-version-990757 kubelet[736]: I1123 10:17:50.268516     736 scope.go:117] "RemoveContainer" containerID="ac1800cd9d6bd93eb082a400dd68302dc038514b14aec60a85e0f0add9ad305f"
	Nov 23 10:17:50 old-k8s-version-990757 kubelet[736]: I1123 10:17:50.404399     736 scope.go:117] "RemoveContainer" containerID="ac1800cd9d6bd93eb082a400dd68302dc038514b14aec60a85e0f0add9ad305f"
	Nov 23 10:17:50 old-k8s-version-990757 kubelet[736]: I1123 10:17:50.404703     736 scope.go:117] "RemoveContainer" containerID="23ccf4ce86c662244f4b739e4ab18cdc793df7a827799056f377d3f50eab0214"
	Nov 23 10:17:50 old-k8s-version-990757 kubelet[736]: E1123 10:17:50.405062     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-bfhkn_kubernetes-dashboard(ab90c537-1023-4768-8724-1bd443811215)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn" podUID="ab90c537-1023-4768-8724-1bd443811215"
	Nov 23 10:17:56 old-k8s-version-990757 kubelet[736]: I1123 10:17:56.071759     736 scope.go:117] "RemoveContainer" containerID="23ccf4ce86c662244f4b739e4ab18cdc793df7a827799056f377d3f50eab0214"
	Nov 23 10:17:56 old-k8s-version-990757 kubelet[736]: E1123 10:17:56.072204     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-bfhkn_kubernetes-dashboard(ab90c537-1023-4768-8724-1bd443811215)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn" podUID="ab90c537-1023-4768-8724-1bd443811215"
	Nov 23 10:18:07 old-k8s-version-990757 kubelet[736]: I1123 10:18:07.501752     736 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 23 10:18:07 old-k8s-version-990757 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:18:07 old-k8s-version-990757 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:18:07 old-k8s-version-990757 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 10:18:07 old-k8s-version-990757 systemd[1]: kubelet.service: Consumed 1.667s CPU time.
	
	
	==> kubernetes-dashboard [ffe2f071023537db208786f25a6aea227c1fe39c1b3f10f869486618924f5387] <==
	2025/11/23 10:17:34 Starting overwatch
	2025/11/23 10:17:34 Using namespace: kubernetes-dashboard
	2025/11/23 10:17:34 Using in-cluster config to connect to apiserver
	2025/11/23 10:17:34 Using secret token for csrf signing
	2025/11/23 10:17:34 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 10:17:34 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 10:17:34 Successful initial request to the apiserver, version: v1.28.0
	2025/11/23 10:17:34 Generating JWE encryption key
	2025/11/23 10:17:34 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 10:17:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 10:17:34 Initializing JWE encryption key from synchronized object
	2025/11/23 10:17:34 Creating in-cluster Sidecar client
	2025/11/23 10:17:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:17:34 Serving insecurely on HTTP port: 9090
	2025/11/23 10:18:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [9ccd16d74353c15e1600527cf40023e30033f332b977b03880686a3913da40af] <==
	I1123 10:17:44.443476       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:17:44.451494       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:17:44.451543       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 10:18:01.848951       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:18:01.849052       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"35efb046-0c13-4b37-bd0a-2155a92525f0", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-990757_d2f4633f-8f97-4e35-b33f-041482bd8d35 became leader
	I1123 10:18:01.849127       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-990757_d2f4633f-8f97-4e35-b33f-041482bd8d35!
	I1123 10:18:01.949403       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-990757_d2f4633f-8f97-4e35-b33f-041482bd8d35!
	
	
	==> storage-provisioner [c6bd46fb7d9861dd655a23db64bd18f5e89613a832e4638352e74fcf52951f8f] <==
	I1123 10:17:13.722967       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 10:17:43.725573       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-990757 -n old-k8s-version-990757
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-990757 -n old-k8s-version-990757: exit status 2 (370.440854ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-990757 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-990757
helpers_test.go:243: (dbg) docker inspect old-k8s-version-990757:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fd35c6e2de37eeafffc0c894be730f01c526a52c707a28062e20151e44ba2fa0",
	        "Created": "2025-11-23T10:15:48.885853944Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 367054,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:17:03.88077584Z",
	            "FinishedAt": "2025-11-23T10:17:02.949192527Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/fd35c6e2de37eeafffc0c894be730f01c526a52c707a28062e20151e44ba2fa0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fd35c6e2de37eeafffc0c894be730f01c526a52c707a28062e20151e44ba2fa0/hostname",
	        "HostsPath": "/var/lib/docker/containers/fd35c6e2de37eeafffc0c894be730f01c526a52c707a28062e20151e44ba2fa0/hosts",
	        "LogPath": "/var/lib/docker/containers/fd35c6e2de37eeafffc0c894be730f01c526a52c707a28062e20151e44ba2fa0/fd35c6e2de37eeafffc0c894be730f01c526a52c707a28062e20151e44ba2fa0-json.log",
	        "Name": "/old-k8s-version-990757",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-990757:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-990757",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fd35c6e2de37eeafffc0c894be730f01c526a52c707a28062e20151e44ba2fa0",
	                "LowerDir": "/var/lib/docker/overlay2/a2ee0c3fffb58f362d6769aa6722dd8802b1b1ff1dbb3e5e659525bd269aeedd-init/diff:/var/lib/docker/overlay2/fa24abb4c55f78a010c7e2a32f724b8d5e912441e40bb77877899b0e5f3a9c8d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a2ee0c3fffb58f362d6769aa6722dd8802b1b1ff1dbb3e5e659525bd269aeedd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a2ee0c3fffb58f362d6769aa6722dd8802b1b1ff1dbb3e5e659525bd269aeedd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a2ee0c3fffb58f362d6769aa6722dd8802b1b1ff1dbb3e5e659525bd269aeedd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-990757",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-990757/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-990757",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-990757",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-990757",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "04c3e56e5f77c804f160ce18ac68cf438f5dbeb62ac14c22e2394d80dc4c3c0b",
	            "SandboxKey": "/var/run/docker/netns/04c3e56e5f77",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-990757": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "052388d40ecf9cf5a4a04b634ec5fc574a97435df4a8b65c1a426a6b8091971d",
	                    "EndpointID": "bd29407e3a0ea6f19bf8b2c1821256775e648599e2d867de641f0af82c1a561d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "f2:ee:64:b1:09:8c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-990757",
	                        "fd35c6e2de37"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-990757 -n old-k8s-version-990757
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-990757 -n old-k8s-version-990757: exit status 2 (354.389686ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-990757 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-990757 logs -n 25: (1.179790779s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-791161 sudo docker system info                                                                                                                                 │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p bridge-791161 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p bridge-791161 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p bridge-791161 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo cri-dockerd --version                                                                                                                              │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p bridge-791161 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo containerd config dump                                                                                                                             │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo crio config                                                                                                                                        │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ delete  │ -p bridge-791161                                                                                                                                                         │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ delete  │ -p disable-driver-mounts-268907                                                                                                                                          │ disable-driver-mounts-268907 │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-541522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p default-k8s-diff-port-772252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-412306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ start   │ -p embed-certs-412306 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ old-k8s-version-990757 image list --format=json                                                                                                                          │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p old-k8s-version-990757 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-772252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-772252 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
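	The table above is the recorded command history: each profile is exercised and then probed over ssh to dump its container-runtime configuration before deletion. A minimal sketch of collecting the same CRI-O diagnostics by hand (profile name taken from the table; substitute any live profile, since bridge-791161 was deleted at the end of its run):

	    # Assumes a running minikube profile; bridge-791161 is only illustrative here.
	    PROFILE=bridge-791161
	    # Show the crio systemd unit as the harness does.
	    minikube -p "$PROFILE" ssh "sudo systemctl cat crio --no-pager"
	    # Dump the effective CRI-O configuration.
	    minikube -p "$PROFILE" ssh "sudo crio config"
	    # Print every file under /etc/crio with its path, mirroring the find/exec row above.
	    minikube -p "$PROFILE" ssh "sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;"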
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:17:19
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:17:19.609492  373797 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:17:19.609729  373797 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:17:19.609737  373797 out.go:374] Setting ErrFile to fd 2...
	I1123 10:17:19.609741  373797 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:17:19.609928  373797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:17:19.610361  373797 out.go:368] Setting JSON to false
	I1123 10:17:19.611590  373797 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10781,"bootTime":1763882259,"procs":496,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:17:19.611646  373797 start.go:143] virtualization: kvm guest
	I1123 10:17:19.613670  373797 out.go:179] * [embed-certs-412306] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:17:19.614888  373797 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:17:19.614881  373797 notify.go:221] Checking for updates...
	I1123 10:17:19.616064  373797 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:17:19.617045  373797 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:17:19.617927  373797 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 10:17:19.618967  373797 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:17:19.619935  373797 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:17:19.621299  373797 config.go:182] Loaded profile config "embed-certs-412306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:19.621911  373797 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:17:19.648614  373797 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 10:17:19.648746  373797 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:17:19.710021  373797 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-23 10:17:19.699419611 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:17:19.710161  373797 docker.go:319] overlay module found
	I1123 10:17:19.712107  373797 out.go:179] * Using the docker driver based on existing profile
	I1123 10:17:19.713258  373797 start.go:309] selected driver: docker
	I1123 10:17:19.713275  373797 start.go:927] validating driver "docker" against &{Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:17:19.713374  373797 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:17:19.713898  373797 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:17:19.779691  373797 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-23 10:17:19.765216478 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:17:19.779989  373797 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:17:19.780023  373797 cni.go:84] Creating CNI manager for ""
	I1123 10:17:19.780080  373797 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:17:19.780271  373797 start.go:353] cluster config:
	{Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:17:19.782420  373797 out.go:179] * Starting "embed-certs-412306" primary control-plane node in "embed-certs-412306" cluster
	I1123 10:17:19.783638  373797 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:17:19.785045  373797 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:17:19.786269  373797 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:17:19.786307  373797 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 10:17:19.786316  373797 cache.go:65] Caching tarball of preloaded images
	I1123 10:17:19.786372  373797 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:17:19.786421  373797 preload.go:238] Found /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 10:17:19.786437  373797 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:17:19.786558  373797 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/config.json ...
	I1123 10:17:19.811595  373797 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:17:19.811627  373797 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:17:19.811673  373797 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:17:19.811717  373797 start.go:360] acquireMachinesLock for embed-certs-412306: {Name:mk4f25fc676f86a4d15ab0bc341b16f0d56928c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:17:19.811792  373797 start.go:364] duration metric: took 48.053µs to acquireMachinesLock for "embed-certs-412306"
	I1123 10:17:19.811817  373797 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:17:19.811827  373797 fix.go:54] fixHost starting: 
	I1123 10:17:19.812155  373797 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:17:19.832074  373797 fix.go:112] recreateIfNeeded on embed-certs-412306: state=Stopped err=<nil>
	W1123 10:17:19.832132  373797 fix.go:138] unexpected machine state, will restart: <nil>
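	At this point the existing embed-certs-412306 machine is found in state=Stopped, so minikube takes the restart path instead of recreating it. A rough stand-alone equivalent of that state check with the docker CLI (container name from the log; minikube's own fixHost path does considerably more than this):

	    # Query the container state the same way the log line above does.
	    STATE=$(docker container inspect embed-certs-412306 --format '{{.State.Status}}')
	    if [ "$STATE" != "running" ]; then
	      # Profile exists but is stopped: start the existing container rather than recreating it.
	      docker start embed-certs-412306
	    fi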
	I1123 10:17:18.495023  371192 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:17:18.495055  371192 machine.go:97] duration metric: took 5.084691596s to provisionDockerMachine
	I1123 10:17:18.495069  371192 start.go:293] postStartSetup for "no-preload-541522" (driver="docker")
	I1123 10:17:18.495082  371192 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:17:18.495215  371192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:17:18.495278  371192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:17:18.522688  371192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:17:18.634392  371192 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:17:18.638904  371192 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:17:18.638946  371192 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:17:18.638961  371192 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/addons for local assets ...
	I1123 10:17:18.639015  371192 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/files for local assets ...
	I1123 10:17:18.639129  371192 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem -> 678702.pem in /etc/ssl/certs
	I1123 10:17:18.639289  371192 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:17:18.650865  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:17:18.676275  371192 start.go:296] duration metric: took 181.188377ms for postStartSetup
	I1123 10:17:18.676398  371192 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:17:18.676447  371192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:17:18.696551  371192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:17:18.798813  371192 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:17:18.804200  371192 fix.go:56] duration metric: took 5.847399025s for fixHost
	I1123 10:17:18.804227  371192 start.go:83] releasing machines lock for "no-preload-541522", held for 5.847449946s
	I1123 10:17:18.804314  371192 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-541522
	I1123 10:17:18.823965  371192 ssh_runner.go:195] Run: cat /version.json
	I1123 10:17:18.824026  371192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:17:18.824050  371192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:17:18.824151  371192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:17:18.846278  371192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:17:18.847666  371192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:17:19.015957  371192 ssh_runner.go:195] Run: systemctl --version
	I1123 10:17:19.023883  371192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:17:19.072321  371192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:17:19.078795  371192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:17:19.078868  371192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:17:19.088538  371192 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:17:19.088566  371192 start.go:496] detecting cgroup driver to use...
	I1123 10:17:19.088600  371192 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 10:17:19.088643  371192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:17:19.110539  371192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:17:19.132949  371192 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:17:19.133028  371192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:17:19.150165  371192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:17:19.165619  371192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:17:19.271465  371192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:17:19.379873  371192 docker.go:234] disabling docker service ...
	I1123 10:17:19.379932  371192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:17:19.398139  371192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:17:19.412992  371192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:17:19.503640  371192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:17:19.600343  371192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:17:19.613822  371192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:17:19.629382  371192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:17:19.629446  371192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:19.640465  371192 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 10:17:19.640529  371192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:19.651535  371192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:19.661697  371192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:19.674338  371192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:17:19.684964  371192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:19.697156  371192 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:19.707055  371192 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:19.717460  371192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:17:19.725865  371192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:17:19.736523  371192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:19.829013  371192 ssh_runner.go:195] Run: sudo systemctl restart crio
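	The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl) and points crictl at the CRI-O socket before restarting the service. Condensed into one sketch, using the same edits shown in the log (the sysctl lines are omitted for brevity):

	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    # Point crictl at the CRI-O socket.
	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	    # Pin the pause image and switch CRI-O to the systemd cgroup manager.
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	    # Pick up the changes.
	    sudo systemctl daemon-reload && sudo systemctl restart crio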
	I1123 10:17:19.984026  371192 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:17:19.984148  371192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:17:19.988801  371192 start.go:564] Will wait 60s for crictl version
	I1123 10:17:19.988866  371192 ssh_runner.go:195] Run: which crictl
	I1123 10:17:19.993024  371192 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:17:20.026159  371192 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:17:20.026262  371192 ssh_runner.go:195] Run: crio --version
	I1123 10:17:20.057945  371192 ssh_runner.go:195] Run: crio --version
	I1123 10:17:20.092537  371192 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:17:20.095052  371192 cli_runner.go:164] Run: docker network inspect no-preload-541522 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:17:20.113293  371192 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 10:17:20.117900  371192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
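	The one-liner above refreshes the host.minikube.internal entry in /etc/hosts without duplicating it. Expanded with comments, the same pattern reads:

	    HOSTS_IP=192.168.85.1   # gateway IP from this log; yours will differ
	    # Drop any existing host.minikube.internal line, append the current mapping,
	    # then copy the result back over /etc/hosts.
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      printf '%s\thost.minikube.internal\n' "$HOSTS_IP"
	    } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts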
	I1123 10:17:20.129916  371192 kubeadm.go:884] updating cluster {Name:no-preload-541522 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-541522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:17:20.130038  371192 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:17:20.130098  371192 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:17:20.168390  371192 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:17:20.168418  371192 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:17:20.168427  371192 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1123 10:17:20.168553  371192 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-541522 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-541522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
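	The kubelet unit fragment above is written as a systemd drop-in (10-kubeadm.conf); the empty ExecStart= line resets the ExecStart inherited from the base kubelet.service so the drop-in's own command line takes effect, which systemd requires for services that allow only one ExecStart. A quick way to confirm which command line wins after the later daemon-reload:

	    # Show the merged unit: base kubelet.service plus the 10-kubeadm.conf drop-in.
	    sudo systemctl cat kubelet --no-pager
	    # The effective ExecStart should be the drop-in's, not the base unit's.
	    systemctl show kubelet -p ExecStart --no-pager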
	I1123 10:17:20.168646  371192 ssh_runner.go:195] Run: crio config
	I1123 10:17:20.221690  371192 cni.go:84] Creating CNI manager for ""
	I1123 10:17:20.221718  371192 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:17:20.221739  371192 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:17:20.221769  371192 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-541522 NodeName:no-preload-541522 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:17:20.221955  371192 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-541522"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
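	The rendered kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml.new; further down this restart path minikube diffs it against the kubeadm.yaml the cluster was originally built with to decide whether the control plane needs reconfiguring. A sketch of that check, run inside the node:

	    # Exit status 0 from diff means the rendered config is unchanged,
	    # so the existing control plane can be reused as-is.
	    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	      echo "kubeadm config unchanged; no reconfiguration needed"
	    else
	      echo "kubeadm config drifted; control plane would need to be reconfigured"
	    fi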
	
	I1123 10:17:20.222044  371192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:17:20.231152  371192 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:17:20.231287  371192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:17:20.240306  371192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 10:17:20.253726  371192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:17:20.268663  371192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1123 10:17:20.286013  371192 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:17:20.290286  371192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:17:20.301340  371192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:20.405447  371192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:20.425508  371192 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522 for IP: 192.168.85.2
	I1123 10:17:20.425698  371192 certs.go:195] generating shared ca certs ...
	I1123 10:17:20.425746  371192 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:20.425993  371192 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 10:17:20.426072  371192 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 10:17:20.426083  371192 certs.go:257] generating profile certs ...
	I1123 10:17:20.426244  371192 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/client.key
	I1123 10:17:20.426355  371192 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/apiserver.key.29b5f89d
	I1123 10:17:20.426438  371192 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/proxy-client.key
	I1123 10:17:20.426605  371192 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem (1338 bytes)
	W1123 10:17:20.426644  371192 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870_empty.pem, impossibly tiny 0 bytes
	I1123 10:17:20.426655  371192 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 10:17:20.426693  371192 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:17:20.426725  371192 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:17:20.426756  371192 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 10:17:20.426822  371192 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:17:20.428032  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:17:20.456018  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:17:20.479658  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:17:20.501657  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 10:17:20.529181  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 10:17:20.550509  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:17:20.569511  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:17:20.588713  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:17:20.606754  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:17:20.625365  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem --> /usr/share/ca-certificates/67870.pem (1338 bytes)
	I1123 10:17:20.644697  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /usr/share/ca-certificates/678702.pem (1708 bytes)
	I1123 10:17:20.662851  371192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:17:20.675998  371192 ssh_runner.go:195] Run: openssl version
	I1123 10:17:20.682347  371192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678702.pem && ln -fs /usr/share/ca-certificates/678702.pem /etc/ssl/certs/678702.pem"
	I1123 10:17:20.691464  371192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678702.pem
	I1123 10:17:20.695411  371192 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:28 /usr/share/ca-certificates/678702.pem
	I1123 10:17:20.695463  371192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678702.pem
	I1123 10:17:20.730632  371192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678702.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:17:20.739401  371192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:17:20.748466  371192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:20.752659  371192 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:20.752735  371192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:20.788588  371192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:17:20.797604  371192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67870.pem && ln -fs /usr/share/ca-certificates/67870.pem /etc/ssl/certs/67870.pem"
	I1123 10:17:20.806894  371192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67870.pem
	I1123 10:17:20.811228  371192 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:28 /usr/share/ca-certificates/67870.pem
	I1123 10:17:20.811284  371192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67870.pem
	I1123 10:17:20.846328  371192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67870.pem /etc/ssl/certs/51391683.0"
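	Each certificate above is copied under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its subject-hash name (51391683.0 for 67870.pem here), which is how OpenSSL locates CA certificates at verification time. The hash-named link can be recomputed by hand:

	    CERT=/usr/share/ca-certificates/67870.pem
	    # OpenSSL looks CAs up by the subject-name hash plus a .0/.1/... suffix.
	    HASH=$(openssl x509 -hash -noout -in "$CERT")
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"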
	I1123 10:17:20.855328  371192 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:17:20.859478  371192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:17:20.893578  371192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:17:20.929466  371192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:17:20.977899  371192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:17:21.020876  371192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:17:21.070653  371192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1123 10:17:21.123318  371192 kubeadm.go:401] StartCluster: {Name:no-preload-541522 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-541522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:17:21.123410  371192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:17:21.123464  371192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:17:21.157433  371192 cri.go:89] found id: "3638abd54c634ee34a952430b3c8ad3b8c78fb2c6abb24bdbdb0382ea4147574"
	I1123 10:17:21.157457  371192 cri.go:89] found id: "3806d3b11c0c4af0a295b79daeec9cddc1ca76da75190a71f7234b95f181f202"
	I1123 10:17:21.157464  371192 cri.go:89] found id: "454d88050f14061405415d3f827ed9bd0308c85f15a90182f9e2c8138c52f80e"
	I1123 10:17:21.157469  371192 cri.go:89] found id: "a08adaf22d6a20e8d1bde7d9ffe78523a672a25236e3b7bd280fe7482c65da6c"
	I1123 10:17:21.157473  371192 cri.go:89] found id: ""
	I1123 10:17:21.157519  371192 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:17:21.170853  371192 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:17:21Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:17:21.170942  371192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:17:21.179761  371192 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:17:21.179782  371192 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:17:21.179832  371192 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:17:21.188635  371192 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:17:21.189189  371192 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-541522" does not appear in /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:17:21.189463  371192 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-64343/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-541522" cluster setting kubeconfig missing "no-preload-541522" context setting]
	I1123 10:17:21.190011  371192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:21.191382  371192 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:17:21.200134  371192 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 10:17:21.200165  371192 kubeadm.go:602] duration metric: took 20.377182ms to restartPrimaryControlPlane
	I1123 10:17:21.200176  371192 kubeadm.go:403] duration metric: took 76.869746ms to StartCluster
	I1123 10:17:21.200197  371192 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:21.200268  371192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:17:21.201522  371192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:21.201810  371192 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:17:21.201858  371192 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:17:21.201968  371192 addons.go:70] Setting storage-provisioner=true in profile "no-preload-541522"
	I1123 10:17:21.201995  371192 addons.go:239] Setting addon storage-provisioner=true in "no-preload-541522"
	W1123 10:17:21.202008  371192 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:17:21.202006  371192 addons.go:70] Setting dashboard=true in profile "no-preload-541522"
	I1123 10:17:21.202029  371192 addons.go:70] Setting default-storageclass=true in profile "no-preload-541522"
	I1123 10:17:21.202053  371192 addons.go:239] Setting addon dashboard=true in "no-preload-541522"
	I1123 10:17:21.202055  371192 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-541522"
	W1123 10:17:21.202063  371192 addons.go:248] addon dashboard should already be in state true
	I1123 10:17:21.202081  371192 config.go:182] Loaded profile config "no-preload-541522": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:21.202038  371192 host.go:66] Checking if "no-preload-541522" exists ...
	I1123 10:17:21.202110  371192 host.go:66] Checking if "no-preload-541522" exists ...
	I1123 10:17:21.202447  371192 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:17:21.202598  371192 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:17:21.202660  371192 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:17:21.204706  371192 out.go:179] * Verifying Kubernetes components...
	I1123 10:17:21.206052  371192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:21.227863  371192 addons.go:239] Setting addon default-storageclass=true in "no-preload-541522"
	W1123 10:17:21.227926  371192 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:17:21.227956  371192 host.go:66] Checking if "no-preload-541522" exists ...
	I1123 10:17:21.228549  371192 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:17:21.232585  371192 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 10:17:21.232585  371192 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:17:21.233696  371192 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:21.233729  371192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:17:21.233799  371192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:17:21.233705  371192 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:17:21.234809  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:17:21.234828  371192 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:17:21.234890  371192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:17:21.265221  371192 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:21.265260  371192 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:17:21.265326  371192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:17:21.274943  371192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:17:21.276965  371192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:17:21.296189  371192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:17:21.367731  371192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:21.382397  371192 node_ready.go:35] waiting up to 6m0s for node "no-preload-541522" to be "Ready" ...
	I1123 10:17:21.398915  371192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:21.401528  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:17:21.401552  371192 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:17:21.419867  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:17:21.419897  371192 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:17:21.422575  371192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:21.439431  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:17:21.439464  371192 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:17:21.459190  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:17:21.459215  371192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:17:21.474803  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:17:21.474837  371192 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:17:21.490492  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:17:21.490520  371192 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:17:21.504992  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:17:21.505017  371192 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:17:21.519429  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:17:21.519456  371192 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:17:21.533295  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:17:21.533322  371192 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:17:21.550435  371192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
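Note: each addon manifest is scp'd into /etc/kubernetes/addons and then applied in a single kubectl invocation with one -f flag per file, as in the dashboard apply above. A minimal sketch of assembling that command shape (buildApplyCmd is a hypothetical helper, not the addons.go implementation; paths come from the log):

package main

import (
	"fmt"
	"os/exec"
)

// buildApplyCmd mirrors the command seen in the log: sudo with a KUBECONFIG
// assignment, the pinned kubectl binary, and repeated -f flags.
func buildApplyCmd(kubectl, kubeconfig string, manifests []string) *exec.Cmd {
	args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	return exec.Command("sudo", args...)
}

func main() {
	cmd := buildApplyCmd(
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		},
	)
	// Print the composed command instead of running it.
	fmt.Println(cmd.String())
}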
	I1123 10:17:18.396407  371315 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-772252:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.434126085s)
	I1123 10:17:18.396438  371315 kic.go:203] duration metric: took 4.434295488s to extract preloaded images to volume ...
	W1123 10:17:18.396521  371315 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 10:17:18.396560  371315 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 10:17:18.396604  371315 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 10:17:18.463256  371315 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-772252 --name default-k8s-diff-port-772252 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-772252 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-772252 --network default-k8s-diff-port-772252 --ip 192.168.103.2 --volume default-k8s-diff-port-772252:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 10:17:18.796638  371315 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Running}}
	I1123 10:17:18.816868  371315 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:17:18.840858  371315 cli_runner.go:164] Run: docker exec default-k8s-diff-port-772252 stat /var/lib/dpkg/alternatives/iptables
	I1123 10:17:18.897619  371315 oci.go:144] the created container "default-k8s-diff-port-772252" has a running status.
	I1123 10:17:18.897661  371315 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa...
	I1123 10:17:18.977365  371315 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 10:17:19.006386  371315 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:17:19.030565  371315 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 10:17:19.030591  371315 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-772252 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 10:17:19.079641  371315 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:17:19.103668  371315 machine.go:94] provisionDockerMachine start ...
	I1123 10:17:19.103794  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:19.133387  371315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:19.134363  371315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1123 10:17:19.134412  371315 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:17:19.135234  371315 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54846->127.0.0.1:33113: read: connection reset by peer
	I1123 10:17:22.290470  371315 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-772252
	
	I1123 10:17:22.290505  371315 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-772252"
	I1123 10:17:22.290581  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:22.310197  371315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:22.310489  371315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1123 10:17:22.310506  371315 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-772252 && echo "default-k8s-diff-port-772252" | sudo tee /etc/hostname
	I1123 10:17:22.471190  371315 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-772252
	
	I1123 10:17:22.471288  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:22.491303  371315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:22.491559  371315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1123 10:17:22.491595  371315 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-772252' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-772252/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-772252' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:17:22.649053  371315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:17:22.649118  371315 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-64343/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-64343/.minikube}
	I1123 10:17:22.649148  371315 ubuntu.go:190] setting up certificates
	I1123 10:17:22.649175  371315 provision.go:84] configureAuth start
	I1123 10:17:22.649268  371315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772252
	I1123 10:17:22.670533  371315 provision.go:143] copyHostCerts
	I1123 10:17:22.670621  371315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem, removing ...
	I1123 10:17:22.670640  371315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem
	I1123 10:17:22.670723  371315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem (1082 bytes)
	I1123 10:17:22.670844  371315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem, removing ...
	I1123 10:17:22.670855  371315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem
	I1123 10:17:22.670899  371315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem (1123 bytes)
	I1123 10:17:22.671009  371315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem, removing ...
	I1123 10:17:22.671020  371315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem
	I1123 10:17:22.671063  371315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem (1675 bytes)
	I1123 10:17:22.671173  371315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-772252 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-772252 localhost minikube]
	I1123 10:17:22.781341  371315 provision.go:177] copyRemoteCerts
	I1123 10:17:22.781420  371315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:17:22.781468  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:22.813351  371315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:17:22.707516  371192 node_ready.go:49] node "no-preload-541522" is "Ready"
	I1123 10:17:22.707555  371192 node_ready.go:38] duration metric: took 1.325107134s for node "no-preload-541522" to be "Ready" ...
	I1123 10:17:22.707572  371192 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:17:22.707865  371192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:17:23.284024  371192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.885050693s)
	I1123 10:17:23.284105  371192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.861477632s)
	I1123 10:17:23.284235  371192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.733760656s)
	I1123 10:17:23.284398  371192 api_server.go:72] duration metric: took 2.082551658s to wait for apiserver process to appear ...
	I1123 10:17:23.284414  371192 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:17:23.284434  371192 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:17:23.286130  371192 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-541522 addons enable metrics-server
	
	I1123 10:17:23.289610  371192 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:17:23.289631  371192 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
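Note: the 500 above is expected during startup; the rbac/bootstrap-roles and scheduling post-start hooks have not finished yet, so the client keeps polling /healthz until it returns 200. A hedged sketch of such a poller (stdlib only, certificate verification skipped for brevity; not the api_server.go implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline expires. A 500 with "healthz check failed" (as in the log)
// simply means "retry later".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a cluster-internal cert; a real client
			// would pin the cluster CA instead of skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}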
	I1123 10:17:23.292533  371192 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1123 10:17:20.914139  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:22.914473  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	I1123 10:17:19.834110  373797 out.go:252] * Restarting existing docker container for "embed-certs-412306" ...
	I1123 10:17:19.834184  373797 cli_runner.go:164] Run: docker start embed-certs-412306
	I1123 10:17:20.130659  373797 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:17:20.150941  373797 kic.go:430] container "embed-certs-412306" state is running.
	I1123 10:17:20.151437  373797 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412306
	I1123 10:17:20.172969  373797 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/config.json ...
	I1123 10:17:20.173319  373797 machine.go:94] provisionDockerMachine start ...
	I1123 10:17:20.173400  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:20.193884  373797 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:20.194212  373797 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1123 10:17:20.194231  373797 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:17:20.195045  373797 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48678->127.0.0.1:33118: read: connection reset by peer
	I1123 10:17:23.348386  373797 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-412306
	
	I1123 10:17:23.348432  373797 ubuntu.go:182] provisioning hostname "embed-certs-412306"
	I1123 10:17:23.348510  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:23.369008  373797 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:23.369294  373797 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1123 10:17:23.369309  373797 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-412306 && echo "embed-certs-412306" | sudo tee /etc/hostname
	I1123 10:17:23.527808  373797 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-412306
	
	I1123 10:17:23.527905  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:23.552954  373797 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:23.553243  373797 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1123 10:17:23.553263  373797 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-412306' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-412306/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-412306' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:17:23.705470  373797 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:17:23.705501  373797 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-64343/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-64343/.minikube}
	I1123 10:17:23.705547  373797 ubuntu.go:190] setting up certificates
	I1123 10:17:23.705570  373797 provision.go:84] configureAuth start
	I1123 10:17:23.705648  373797 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412306
	I1123 10:17:23.727746  373797 provision.go:143] copyHostCerts
	I1123 10:17:23.727819  373797 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem, removing ...
	I1123 10:17:23.727834  373797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem
	I1123 10:17:23.727904  373797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem (1675 bytes)
	I1123 10:17:23.728152  373797 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem, removing ...
	I1123 10:17:23.728170  373797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem
	I1123 10:17:23.728229  373797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem (1082 bytes)
	I1123 10:17:23.728394  373797 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem, removing ...
	I1123 10:17:23.728408  373797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem
	I1123 10:17:23.728442  373797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem (1123 bytes)
	I1123 10:17:23.728545  373797 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem org=jenkins.embed-certs-412306 san=[127.0.0.1 192.168.94.2 embed-certs-412306 localhost minikube]
	I1123 10:17:23.786003  373797 provision.go:177] copyRemoteCerts
	I1123 10:17:23.786110  373797 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:17:23.786168  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:23.808607  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:23.930337  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 10:17:23.954195  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:17:23.973335  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1123 10:17:23.992599  373797 provision.go:87] duration metric: took 287.009489ms to configureAuth
	I1123 10:17:23.992633  373797 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:17:23.992827  373797 config.go:182] Loaded profile config "embed-certs-412306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:23.992947  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:24.015952  373797 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:24.016359  373797 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1123 10:17:24.016396  373797 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:17:24.382671  373797 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:17:24.382710  373797 machine.go:97] duration metric: took 4.209367018s to provisionDockerMachine
	I1123 10:17:24.382728  373797 start.go:293] postStartSetup for "embed-certs-412306" (driver="docker")
	I1123 10:17:24.382754  373797 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:17:24.382834  373797 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:17:24.382885  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:24.404505  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:24.511869  373797 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:17:24.516166  373797 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:17:24.516207  373797 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:17:24.516222  373797 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/addons for local assets ...
	I1123 10:17:24.516280  373797 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/files for local assets ...
	I1123 10:17:24.516393  373797 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem -> 678702.pem in /etc/ssl/certs
	I1123 10:17:24.516518  373797 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:17:24.524244  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:17:24.542545  373797 start.go:296] duration metric: took 159.79015ms for postStartSetup
	I1123 10:17:24.542619  373797 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:17:24.542668  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:24.563717  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:22.926511  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 10:17:22.950745  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 10:17:22.971167  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:17:22.992406  371315 provision.go:87] duration metric: took 343.209444ms to configureAuth
	I1123 10:17:22.992440  371315 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:17:22.992638  371315 config.go:182] Loaded profile config "default-k8s-diff-port-772252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:22.992764  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:23.015449  371315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:23.015746  371315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1123 10:17:23.015770  371315 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:17:23.334757  371315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:17:23.334787  371315 machine.go:97] duration metric: took 4.23109286s to provisionDockerMachine
	I1123 10:17:23.334800  371315 client.go:176] duration metric: took 10.163153814s to LocalClient.Create
	I1123 10:17:23.334826  371315 start.go:167] duration metric: took 10.163248519s to libmachine.API.Create "default-k8s-diff-port-772252"
	I1123 10:17:23.334840  371315 start.go:293] postStartSetup for "default-k8s-diff-port-772252" (driver="docker")
	I1123 10:17:23.334860  371315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:17:23.334929  371315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:17:23.334985  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:23.356328  371315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:17:23.463374  371315 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:17:23.467492  371315 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:17:23.467528  371315 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:17:23.467542  371315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/addons for local assets ...
	I1123 10:17:23.467604  371315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/files for local assets ...
	I1123 10:17:23.467697  371315 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem -> 678702.pem in /etc/ssl/certs
	I1123 10:17:23.467820  371315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:17:23.475956  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:17:23.497077  371315 start.go:296] duration metric: took 162.21628ms for postStartSetup
	I1123 10:17:23.497453  371315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772252
	I1123 10:17:23.517994  371315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/config.json ...
	I1123 10:17:23.518317  371315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:17:23.518376  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:23.544356  371315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:17:23.649434  371315 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:17:23.654312  371315 start.go:128] duration metric: took 10.487060831s to createHost
	I1123 10:17:23.654340  371315 start.go:83] releasing machines lock for "default-k8s-diff-port-772252", held for 10.487196123s
	I1123 10:17:23.654429  371315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772252
	I1123 10:17:23.672341  371315 ssh_runner.go:195] Run: cat /version.json
	I1123 10:17:23.672366  371315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:17:23.672402  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:23.672450  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:23.692134  371315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:17:23.692271  371315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:17:23.884469  371315 ssh_runner.go:195] Run: systemctl --version
	I1123 10:17:23.894358  371315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:17:23.951450  371315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:17:23.956897  371315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:17:23.956984  371315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:17:23.983807  371315 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 10:17:23.983830  371315 start.go:496] detecting cgroup driver to use...
	I1123 10:17:23.983859  371315 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 10:17:23.983898  371315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:17:24.001497  371315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:17:24.017078  371315 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:17:24.017175  371315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:17:24.033394  371315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:17:24.052236  371315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:17:24.146681  371315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:17:24.245622  371315 docker.go:234] disabling docker service ...
	I1123 10:17:24.245695  371315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:17:24.267262  371315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:17:24.283984  371315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:17:24.393614  371315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:17:24.485577  371315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:17:24.498373  371315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:17:24.513700  371315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:17:24.513745  371315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:24.524969  371315 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 10:17:24.525040  371315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:24.534062  371315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:24.543449  371315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:24.552383  371315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:17:24.562139  371315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:24.572184  371315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:24.587719  371315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:24.597575  371315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:17:24.606824  371315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:17:24.615535  371315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:24.700246  371315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:17:24.855040  371315 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:17:24.855123  371315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:17:24.859368  371315 start.go:564] Will wait 60s for crictl version
	I1123 10:17:24.859428  371315 ssh_runner.go:195] Run: which crictl
	I1123 10:17:24.863070  371315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:17:24.889521  371315 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:17:24.889599  371315 ssh_runner.go:195] Run: crio --version
	I1123 10:17:24.920115  371315 ssh_runner.go:195] Run: crio --version
	I1123 10:17:24.954417  371315 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
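Note: after restarting crio, the log waits up to 60s for the socket path /var/run/crio/crio.sock and then for a working crictl. A minimal sketch of that kind of wait loop (illustrative only; the socket path comes from the log, the retry interval is assumed):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the CRI socket exists or the timeout elapses,
// mirroring the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}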
	I1123 10:17:24.666037  373797 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:17:24.670358  373797 fix.go:56] duration metric: took 4.858524746s for fixHost
	I1123 10:17:24.670382  373797 start.go:83] releasing machines lock for "embed-certs-412306", held for 4.858576755s
	I1123 10:17:24.670445  373797 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412306
	I1123 10:17:24.688334  373797 ssh_runner.go:195] Run: cat /version.json
	I1123 10:17:24.688391  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:24.688402  373797 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:17:24.688482  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:24.708037  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:24.709542  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:24.881767  373797 ssh_runner.go:195] Run: systemctl --version
	I1123 10:17:24.889568  373797 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:17:24.928028  373797 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:17:24.933463  373797 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:17:24.933545  373797 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:17:24.944053  373797 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:17:24.944096  373797 start.go:496] detecting cgroup driver to use...
	I1123 10:17:24.944134  373797 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 10:17:24.944176  373797 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:17:24.961024  373797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:17:24.975672  373797 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:17:24.975755  373797 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:17:24.992860  373797 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:17:25.007660  373797 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:17:25.101571  373797 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:17:25.187706  373797 docker.go:234] disabling docker service ...
	I1123 10:17:25.187771  373797 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:17:25.203871  373797 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:17:25.220342  373797 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:17:25.310358  373797 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:17:25.403221  373797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:17:25.417018  373797 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:17:25.431507  373797 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:17:25.431564  373797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:25.441415  373797 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 10:17:25.441481  373797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:25.450871  373797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:25.459923  373797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:25.468817  373797 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:17:25.477361  373797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:25.487848  373797 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:25.496857  373797 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:25.506275  373797 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:17:25.514119  373797 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:17:25.522214  373797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:25.609285  373797 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:17:25.788628  373797 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:17:25.788710  373797 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:17:25.794577  373797 start.go:564] Will wait 60s for crictl version
	I1123 10:17:25.794647  373797 ssh_runner.go:195] Run: which crictl
	I1123 10:17:25.801054  373797 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:17:25.830537  373797 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:17:25.830618  373797 ssh_runner.go:195] Run: crio --version
	I1123 10:17:25.862137  373797 ssh_runner.go:195] Run: crio --version
	I1123 10:17:25.896309  373797 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:17:24.955476  371315 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-772252 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:17:24.975771  371315 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1123 10:17:24.980312  371315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:17:24.992335  371315 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-772252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:17:24.992470  371315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:17:24.992532  371315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:17:25.028422  371315 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:17:25.028446  371315 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:17:25.028507  371315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:17:25.062707  371315 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:17:25.062731  371315 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:17:25.062740  371315 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1123 10:17:25.062842  371315 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-772252 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:17:25.062921  371315 ssh_runner.go:195] Run: crio config
	I1123 10:17:25.111817  371315 cni.go:84] Creating CNI manager for ""
	I1123 10:17:25.111854  371315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:17:25.111873  371315 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:17:25.111897  371315 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-772252 NodeName:default-k8s-diff-port-772252 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:17:25.112030  371315 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-772252"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:17:25.112105  371315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:17:25.120360  371315 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:17:25.120421  371315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:17:25.129795  371315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1123 10:17:25.145251  371315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:17:25.160692  371315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
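The kubeadm config rendered above is copied to the node as /var/tmp/minikube/kubeadm.yaml.new and later promoted to kubeadm.yaml. As a side note (minikube does not do this itself), a config like this can be exercised on the node without changing anything, assuming kubeadm v1.34 is on PATH there:

    # Dry-run the rendered config (no changes are made to the node):
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
    # Print kubeadm's own defaults for comparison with the generated file:
    kubeadm config print init-defaults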
	I1123 10:17:25.173307  371315 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:17:25.177001  371315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:17:25.187493  371315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:25.282599  371315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:25.306664  371315 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252 for IP: 192.168.103.2
	I1123 10:17:25.306684  371315 certs.go:195] generating shared ca certs ...
	I1123 10:17:25.306700  371315 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:25.306864  371315 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 10:17:25.306920  371315 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 10:17:25.306934  371315 certs.go:257] generating profile certs ...
	I1123 10:17:25.307023  371315 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/client.key
	I1123 10:17:25.307042  371315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/client.crt with IP's: []
	I1123 10:17:25.369960  371315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/client.crt ...
	I1123 10:17:25.369988  371315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/client.crt: {Name:mk7f4719b240e51f803a30c22478d2cf1d0e1199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:25.370175  371315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/client.key ...
	I1123 10:17:25.370199  371315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/client.key: {Name:mkd811194a7ece5d786aacc912a42bc560ea4296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:25.370292  371315 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.key.21e800d1
	I1123 10:17:25.370312  371315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.crt.21e800d1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1123 10:17:25.423997  371315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.crt.21e800d1 ...
	I1123 10:17:25.424030  371315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.crt.21e800d1: {Name:mk6de12f0748b003728065f4169ec8bcc4410f5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:25.424186  371315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.key.21e800d1 ...
	I1123 10:17:25.424201  371315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.key.21e800d1: {Name:mkfeca4687eb3d49033d88eae184a2c0e40ab44b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:25.424294  371315 certs.go:382] copying /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.crt.21e800d1 -> /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.crt
	I1123 10:17:25.424406  371315 certs.go:386] copying /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.key.21e800d1 -> /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.key
	I1123 10:17:25.424489  371315 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.key
	I1123 10:17:25.424508  371315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.crt with IP's: []
	I1123 10:17:25.484984  371315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.crt ...
	I1123 10:17:25.485010  371315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.crt: {Name:mkc9c6bf8ac400416e9eb1893c09433f60578057 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:25.485213  371315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.key ...
	I1123 10:17:25.485235  371315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.key: {Name:mk504063bf5acfe6751f65cfaba17411b52827e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
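The three profile certs generated above (client, apiserver, aggregator proxy-client) are all signed by the shared minikube CA that was reused a few lines earlier. A rough openssl equivalent of producing one such CA-signed client cert, purely illustrative (the file names and subject are placeholders, not the exact values minikube uses):

    # Illustrative sketch only; ca.crt/ca.key stand in for the shared minikube CA.
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj "/CN=minikube-user/O=system:masters" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -out client.crt
    # Confirm the new cert chains back to the CA:
    openssl verify -CAfile ca.crt client.crt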
	I1123 10:17:25.485488  371315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem (1338 bytes)
	W1123 10:17:25.485543  371315 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870_empty.pem, impossibly tiny 0 bytes
	I1123 10:17:25.485559  371315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 10:17:25.485600  371315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:17:25.485631  371315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:17:25.485652  371315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 10:17:25.485702  371315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:17:25.486510  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:17:25.505646  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:17:25.524124  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:17:25.543811  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 10:17:25.568526  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 10:17:25.588007  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:17:25.606546  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:17:25.626591  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:17:25.647854  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem --> /usr/share/ca-certificates/67870.pem (1338 bytes)
	I1123 10:17:25.673928  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /usr/share/ca-certificates/678702.pem (1708 bytes)
	I1123 10:17:25.698071  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:17:25.717953  371315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:17:25.733564  371315 ssh_runner.go:195] Run: openssl version
	I1123 10:17:25.743071  371315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67870.pem && ln -fs /usr/share/ca-certificates/67870.pem /etc/ssl/certs/67870.pem"
	I1123 10:17:25.755937  371315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67870.pem
	I1123 10:17:25.762383  371315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:28 /usr/share/ca-certificates/67870.pem
	I1123 10:17:25.762464  371315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67870.pem
	I1123 10:17:25.817928  371315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67870.pem /etc/ssl/certs/51391683.0"
	I1123 10:17:25.829386  371315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678702.pem && ln -fs /usr/share/ca-certificates/678702.pem /etc/ssl/certs/678702.pem"
	I1123 10:17:25.840669  371315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678702.pem
	I1123 10:17:25.845206  371315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:28 /usr/share/ca-certificates/678702.pem
	I1123 10:17:25.845259  371315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678702.pem
	I1123 10:17:25.884816  371315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678702.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:17:25.895209  371315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:17:25.905009  371315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:25.909147  371315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:25.909212  371315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:25.947660  371315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
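Each of the three cert installs above follows the same pattern: copy the PEM into /usr/share/ca-certificates, link it into /etc/ssl/certs, hash it with openssl, then link it again under the hash-named file that OpenSSL's lookup expects (b5213941.0 for the minikube CA here). The hash step and the final symlink, combined:

    # The symlink name is the certificate's subject hash, which is exactly what
    # the preceding `openssl x509 -hash -noout` invocation computes:
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"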
	I1123 10:17:25.958547  371315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:17:25.963329  371315 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 10:17:25.963400  371315 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-772252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772252 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:17:25.963515  371315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:17:25.963592  371315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:17:25.994552  371315 cri.go:89] found id: ""
	I1123 10:17:25.994632  371315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:17:26.004720  371315 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:17:26.014394  371315 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 10:17:26.014465  371315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:17:26.023894  371315 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 10:17:26.023927  371315 kubeadm.go:158] found existing configuration files:
	
	I1123 10:17:26.023984  371315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1123 10:17:26.032407  371315 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 10:17:26.032468  371315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 10:17:26.041623  371315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1123 10:17:26.054201  371315 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 10:17:26.054261  371315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:17:26.066701  371315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1123 10:17:26.079955  371315 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 10:17:26.080191  371315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:17:26.093784  371315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1123 10:17:26.105549  371315 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 10:17:26.105617  371315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
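The four grep-then-remove steps above all make the same decision: if a kubeconfig under /etc/kubernetes does not reference the expected control-plane endpoint, delete it so kubeadm regenerates it. Condensed into one loop (minikube's own code runs each check separately, as logged):

    ep="https://control-plane.minikube.internal:8444"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ep" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done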
	I1123 10:17:26.115532  371315 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 10:17:26.160623  371315 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 10:17:26.160969  371315 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 10:17:26.186117  371315 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 10:17:26.186236  371315 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 10:17:26.186285  371315 kubeadm.go:319] OS: Linux
	I1123 10:17:26.186354  371315 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 10:17:26.186447  371315 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 10:17:26.186539  371315 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 10:17:26.186616  371315 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 10:17:26.186682  371315 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 10:17:26.186746  371315 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 10:17:26.186824  371315 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 10:17:26.186884  371315 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 10:17:26.263125  371315 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 10:17:26.263295  371315 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 10:17:26.263483  371315 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 10:17:26.272376  371315 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 10:17:25.897306  373797 cli_runner.go:164] Run: docker network inspect embed-certs-412306 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:17:25.917131  373797 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1123 10:17:25.921503  373797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:17:25.932797  373797 kubeadm.go:884] updating cluster {Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:17:25.932962  373797 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:17:25.933022  373797 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:17:25.971485  373797 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:17:25.971507  373797 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:17:25.971565  373797 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:17:25.998401  373797 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:17:25.998430  373797 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:17:25.998439  373797 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1123 10:17:25.998565  373797 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-412306 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
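The [Unit]/[Service] fragment above becomes the kubelet systemd drop-in; later in this run it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside /lib/systemd/system/kubelet.service. Once the profile is up, the effective unit and recent kubelet output can be inspected from the host, for example:

    # Inspection only; the profile name is taken from this run's logs.
    minikube -p embed-certs-412306 ssh -- systemctl cat kubelet
    minikube -p embed-certs-412306 ssh -- sudo journalctl -u kubelet --no-pager -n 50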
	I1123 10:17:25.998651  373797 ssh_runner.go:195] Run: crio config
	I1123 10:17:26.054182  373797 cni.go:84] Creating CNI manager for ""
	I1123 10:17:26.054212  373797 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:17:26.054230  373797 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:17:26.054261  373797 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-412306 NodeName:embed-certs-412306 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:17:26.054449  373797 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-412306"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:17:26.054528  373797 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:17:26.069247  373797 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:17:26.069315  373797 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:17:26.084536  373797 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1123 10:17:26.105237  373797 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:17:26.122042  373797 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1123 10:17:26.135463  373797 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:17:26.139894  373797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:17:26.152470  373797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:26.259400  373797 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:26.293349  373797 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306 for IP: 192.168.94.2
	I1123 10:17:26.293376  373797 certs.go:195] generating shared ca certs ...
	I1123 10:17:26.293398  373797 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:26.293563  373797 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 10:17:26.293621  373797 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 10:17:26.293631  373797 certs.go:257] generating profile certs ...
	I1123 10:17:26.293719  373797 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/client.key
	I1123 10:17:26.293765  373797 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key.7dd66a37
	I1123 10:17:26.293798  373797 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.key
	I1123 10:17:26.293962  373797 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem (1338 bytes)
	W1123 10:17:26.294032  373797 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870_empty.pem, impossibly tiny 0 bytes
	I1123 10:17:26.294043  373797 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 10:17:26.294080  373797 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:17:26.294150  373797 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:17:26.294182  373797 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 10:17:26.294239  373797 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:17:26.295078  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:17:26.319354  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:17:26.346624  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:17:26.375357  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 10:17:26.408580  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 10:17:26.438245  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 10:17:26.463452  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:17:26.491192  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:17:26.535358  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem --> /usr/share/ca-certificates/67870.pem (1338 bytes)
	I1123 10:17:26.564257  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /usr/share/ca-certificates/678702.pem (1708 bytes)
	I1123 10:17:26.589245  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:17:26.615973  373797 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:17:26.634980  373797 ssh_runner.go:195] Run: openssl version
	I1123 10:17:26.643923  373797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67870.pem && ln -fs /usr/share/ca-certificates/67870.pem /etc/ssl/certs/67870.pem"
	I1123 10:17:26.658008  373797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67870.pem
	I1123 10:17:26.663894  373797 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:28 /usr/share/ca-certificates/67870.pem
	I1123 10:17:26.663963  373797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67870.pem
	I1123 10:17:26.725019  373797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67870.pem /etc/ssl/certs/51391683.0"
	I1123 10:17:26.741335  373797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678702.pem && ln -fs /usr/share/ca-certificates/678702.pem /etc/ssl/certs/678702.pem"
	I1123 10:17:26.754306  373797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678702.pem
	I1123 10:17:26.760205  373797 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:28 /usr/share/ca-certificates/678702.pem
	I1123 10:17:26.760289  373797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678702.pem
	I1123 10:17:26.817066  373797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678702.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:17:26.828242  373797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:17:26.840286  373797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:26.845608  373797 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:26.845667  373797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:26.907823  373797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:17:26.920712  373797 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:17:26.926906  373797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:17:26.993735  373797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:17:27.067117  373797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:17:27.144625  373797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:17:27.218572  373797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:17:27.280794  373797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
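The six openssl runs above confirm that the existing control-plane certs remain valid for at least the next 24 hours (-checkend 86400) before they are reused for the restart. The same check as a single loop:

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -checkend 86400 \
        -in "/var/lib/minikube/certs/$c.crt" || echo "expiring within 24h: $c"
    done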
	I1123 10:17:27.347949  373797 kubeadm.go:401] StartCluster: {Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:17:27.348439  373797 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:17:27.348547  373797 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:17:27.395884  373797 cri.go:89] found id: "0632950c74da2eb4978b2f96c82351b0c7fc311f03cdaaff9f60fb24bdaa3804"
	I1123 10:17:27.395917  373797 cri.go:89] found id: "b7c384560289e99b732f0e7897327765130672b6e7346a6340bd2a1e35372ea5"
	I1123 10:17:27.395924  373797 cri.go:89] found id: "3ce42ea391320b5ee86e145a2f64c2015bb9f8236b5dfa38af9a25f2cb484824"
	I1123 10:17:27.395929  373797 cri.go:89] found id: "e3ffbd81d631a2d4ada1879aabcbc74e4a0a1df338a0ca8e07cf4c3ff88f9430"
	I1123 10:17:27.395933  373797 cri.go:89] found id: ""
	I1123 10:17:27.395979  373797 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:17:27.419845  373797 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:17:27Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:17:27.419963  373797 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:17:27.439378  373797 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:17:27.439398  373797 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:17:27.439448  373797 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:17:27.451084  373797 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:17:27.451946  373797 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-412306" does not appear in /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:17:27.452494  373797 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-64343/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-412306" cluster setting kubeconfig missing "embed-certs-412306" context setting]
	I1123 10:17:27.453585  373797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:27.455654  373797 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:17:27.467125  373797 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1123 10:17:27.467282  373797 kubeadm.go:602] duration metric: took 27.876451ms to restartPrimaryControlPlane
	I1123 10:17:27.467296  373797 kubeadm.go:403] duration metric: took 119.360738ms to StartCluster
	I1123 10:17:27.467315  373797 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:27.467483  373797 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:17:27.469463  373797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:27.470000  373797 config.go:182] Loaded profile config "embed-certs-412306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:27.470115  373797 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:17:27.470204  373797 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-412306"
	I1123 10:17:27.470221  373797 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-412306"
	W1123 10:17:27.470228  373797 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:17:27.470273  373797 host.go:66] Checking if "embed-certs-412306" exists ...
	I1123 10:17:27.470801  373797 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:17:27.471054  373797 addons.go:70] Setting dashboard=true in profile "embed-certs-412306"
	I1123 10:17:27.471072  373797 addons.go:239] Setting addon dashboard=true in "embed-certs-412306"
	W1123 10:17:27.471080  373797 addons.go:248] addon dashboard should already be in state true
	I1123 10:17:27.471255  373797 host.go:66] Checking if "embed-certs-412306" exists ...
	I1123 10:17:27.471727  373797 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:17:27.471889  373797 addons.go:70] Setting default-storageclass=true in profile "embed-certs-412306"
	I1123 10:17:27.471907  373797 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-412306"
	I1123 10:17:27.472219  373797 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:17:27.472422  373797 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:17:27.474200  373797 out.go:179] * Verifying Kubernetes components...
	I1123 10:17:27.475292  373797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:27.502438  373797 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:17:27.503728  373797 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:27.503754  373797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:17:27.503822  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:27.506369  373797 addons.go:239] Setting addon default-storageclass=true in "embed-certs-412306"
	W1123 10:17:27.506905  373797 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:17:27.506973  373797 host.go:66] Checking if "embed-certs-412306" exists ...
	I1123 10:17:27.507482  373797 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:17:27.520746  373797 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 10:17:27.522141  373797 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:17:23.293716  371192 addons.go:530] duration metric: took 2.091867033s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 10:17:23.784999  371192 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:17:23.789545  371192 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:17:23.789569  371192 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:17:24.285244  371192 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:17:24.290382  371192 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 10:17:24.291908  371192 api_server.go:141] control plane version: v1.34.1
	I1123 10:17:24.291943  371192 api_server.go:131] duration metric: took 1.007520894s to wait for apiserver health ...
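The poll above hits /healthz until it returns 200; the earlier 500 responses include the per-check breakdown because the endpoint reports verbose detail on failure. The same breakdown can be requested explicitly once the API server is up, assuming the profile's kubeconfig context is in place:

    kubectl --context no-preload-541522 get --raw '/healthz?verbose'
    # Or directly against the endpoint from this log (healthz is typically
    # served to anonymous clients, so skipping TLS verification is enough):
    curl -sk 'https://192.168.85.2:8443/healthz?verbose'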
	I1123 10:17:24.291958  371192 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:17:24.295996  371192 system_pods.go:59] 8 kube-system pods found
	I1123 10:17:24.296039  371192 system_pods.go:61] "coredns-66bc5c9577-krmwt" [39101b53-5254-41f3-bac9-c711e67dc551] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:24.296051  371192 system_pods.go:61] "etcd-no-preload-541522" [80258726-c8e2-4b27-962c-ee45e6948d2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:17:24.296061  371192 system_pods.go:61] "kindnet-9vppw" [3b98e7a4-34e9-46af-97a1-764b6ed92ec6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 10:17:24.296079  371192 system_pods.go:61] "kube-apiserver-no-preload-541522" [54bb8554-b2d7-4fc2-9d26-507e36b6d56f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:17:24.296121  371192 system_pods.go:61] "kube-controller-manager-no-preload-541522" [b6d91917-0381-4558-9f2a-769f81cf9d86] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:17:24.296136  371192 system_pods.go:61] "kube-proxy-sllct" [c5b13417-4bca-4ec1-8e60-cf5016aa28ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 10:17:24.296144  371192 system_pods.go:61] "kube-scheduler-no-preload-541522" [31a3c55f-ac27-4800-af06-822af5bc6836] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:17:24.296159  371192 system_pods.go:61] "storage-provisioner" [40eb99ea-9515-431c-888b-81826014f8a6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:17:24.296167  371192 system_pods.go:74] duration metric: took 4.202627ms to wait for pod list to return data ...
	I1123 10:17:24.296176  371192 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:17:24.298844  371192 default_sa.go:45] found service account: "default"
	I1123 10:17:24.298867  371192 default_sa.go:55] duration metric: took 2.684141ms for default service account to be created ...
	I1123 10:17:24.298878  371192 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:17:24.301765  371192 system_pods.go:86] 8 kube-system pods found
	I1123 10:17:24.301800  371192 system_pods.go:89] "coredns-66bc5c9577-krmwt" [39101b53-5254-41f3-bac9-c711e67dc551] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:24.301814  371192 system_pods.go:89] "etcd-no-preload-541522" [80258726-c8e2-4b27-962c-ee45e6948d2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:17:24.301825  371192 system_pods.go:89] "kindnet-9vppw" [3b98e7a4-34e9-46af-97a1-764b6ed92ec6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 10:17:24.301839  371192 system_pods.go:89] "kube-apiserver-no-preload-541522" [54bb8554-b2d7-4fc2-9d26-507e36b6d56f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:17:24.301852  371192 system_pods.go:89] "kube-controller-manager-no-preload-541522" [b6d91917-0381-4558-9f2a-769f81cf9d86] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:17:24.301865  371192 system_pods.go:89] "kube-proxy-sllct" [c5b13417-4bca-4ec1-8e60-cf5016aa28ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 10:17:24.301877  371192 system_pods.go:89] "kube-scheduler-no-preload-541522" [31a3c55f-ac27-4800-af06-822af5bc6836] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:17:24.301893  371192 system_pods.go:89] "storage-provisioner" [40eb99ea-9515-431c-888b-81826014f8a6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:17:24.301907  371192 system_pods.go:126] duration metric: took 3.021865ms to wait for k8s-apps to be running ...
	I1123 10:17:24.301921  371192 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:17:24.301973  371192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:17:24.318330  371192 system_svc.go:56] duration metric: took 16.399439ms WaitForService to wait for kubelet
	I1123 10:17:24.318363  371192 kubeadm.go:587] duration metric: took 3.1165169s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:17:24.318385  371192 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:17:24.322994  371192 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:17:24.323037  371192 node_conditions.go:123] node cpu capacity is 8
	I1123 10:17:24.323054  371192 node_conditions.go:105] duration metric: took 4.663725ms to run NodePressure ...
	I1123 10:17:24.323070  371192 start.go:242] waiting for startup goroutines ...
	I1123 10:17:24.323078  371192 start.go:247] waiting for cluster config update ...
	I1123 10:17:24.323103  371192 start.go:256] writing updated cluster config ...
	I1123 10:17:24.323457  371192 ssh_runner.go:195] Run: rm -f paused
	I1123 10:17:24.329879  371192 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:17:24.335776  371192 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-krmwt" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 10:17:26.342596  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	I1123 10:17:26.275186  371315 out.go:252]   - Generating certificates and keys ...
	I1123 10:17:26.275352  371315 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 10:17:26.275478  371315 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 10:17:27.203820  371315 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:17:27.842679  371315 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	W1123 10:17:25.414040  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:27.423694  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	I1123 10:17:27.523106  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:17:27.523125  373797 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:17:27.523187  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:27.544410  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:27.546884  373797 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:27.546911  373797 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:17:27.547054  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:27.554028  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:27.584494  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:27.729896  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:17:27.729923  373797 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:17:27.730389  373797 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:27.748713  373797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:27.762305  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:17:27.762345  373797 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:17:27.773616  373797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:27.783643  373797 node_ready.go:35] waiting up to 6m0s for node "embed-certs-412306" to be "Ready" ...
	I1123 10:17:27.816165  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:17:27.816196  373797 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:17:27.853683  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:17:27.853715  373797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:17:27.895194  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:17:27.895222  373797 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:17:27.929349  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:17:27.929380  373797 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:17:27.952056  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:17:27.952129  373797 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:17:27.972228  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:17:27.972259  373797 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:17:27.995106  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:17:27.995291  373797 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:17:28.022880  373797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:17:30.169450  373797 node_ready.go:49] node "embed-certs-412306" is "Ready"
	I1123 10:17:30.169488  373797 node_ready.go:38] duration metric: took 2.385791286s for node "embed-certs-412306" to be "Ready" ...
	I1123 10:17:30.169508  373797 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:17:30.169570  373797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:17:30.263935  373797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.515175318s)
	I1123 10:17:30.844237  373797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.070570716s)
	I1123 10:17:30.844367  373797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.821379534s)
	I1123 10:17:30.844403  373797 api_server.go:72] duration metric: took 3.371939039s to wait for apiserver process to appear ...
	I1123 10:17:30.844420  373797 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:17:30.844441  373797 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 10:17:30.846035  373797 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-412306 addons enable metrics-server
	
	I1123 10:17:30.847355  373797 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1123 10:17:28.139930  371315 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:17:28.712709  371315 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:17:28.816265  371315 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:17:28.816782  371315 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-772252 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 10:17:29.335727  371315 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:17:29.335950  371315 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-772252 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 10:17:29.643887  371315 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:17:30.187228  371315 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:17:30.521995  371315 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:17:30.522113  371315 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:17:30.784711  371315 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:17:31.090260  371315 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 10:17:31.313967  371315 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:17:31.369836  371315 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:17:31.747785  371315 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:17:31.748584  371315 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:17:31.753537  371315 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1123 10:17:28.348145  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:30.843172  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	I1123 10:17:31.754796  371315 out.go:252]   - Booting up control plane ...
	I1123 10:17:31.754943  371315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:17:31.755055  371315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:17:31.755934  371315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:17:31.779002  371315 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:17:31.779431  371315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 10:17:31.788946  371315 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 10:17:31.789330  371315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:17:31.789392  371315 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:17:31.939409  371315 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 10:17:31.939585  371315 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1123 10:17:29.940244  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:32.465244  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	I1123 10:17:30.848716  373797 addons.go:530] duration metric: took 3.378601039s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1123 10:17:30.850138  373797 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:17:30.850165  373797 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:17:31.345352  373797 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 10:17:31.353137  373797 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:17:31.353176  373797 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:17:31.844492  373797 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 10:17:31.850813  373797 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 10:17:31.852077  373797 api_server.go:141] control plane version: v1.34.1
	I1123 10:17:31.852127  373797 api_server.go:131] duration metric: took 1.007698573s to wait for apiserver health ...
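	(The healthz wait logged above is a plain poll-until-200 loop: the endpoint returns 500 while post-start hooks such as rbac/bootstrap-roles are still pending, then flips to "ok". A minimal standalone sketch of that pattern, not minikube's actual implementation; the URL and the ~4m budget are taken from the log, the 500ms cadence and the skipped TLS verification are assumptions:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    // waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	    func waitForHealthz(url string, timeout time.Duration) error {
	        // Assumption: the bootstrapping apiserver serves a cert we do not verify here;
	        // a real client would trust the cluster CA instead.
	        client := &http.Client{
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	            Timeout:   5 * time.Second,
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil // healthz returned 200: ok
	                }
	                // e.g. 500 while [-]poststarthook/rbac/bootstrap-roles is still failing
	                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	    }

	    func main() {
	        if err := waitForHealthz("https://192.168.94.2:8443/healthz", 4*time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }
	)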
	I1123 10:17:31.852139  373797 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:17:31.855854  373797 system_pods.go:59] 8 kube-system pods found
	I1123 10:17:31.855888  373797 system_pods.go:61] "coredns-66bc5c9577-fxl7j" [4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:31.855899  373797 system_pods.go:61] "etcd-embed-certs-412306" [f8befdc6-c172-4569-9ca7-2d3ba827dbb5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:17:31.855905  373797 system_pods.go:61] "kindnet-sm2h2" [1af4c3f2-8377-4a64-9499-502b9841a81d] Running
	I1123 10:17:31.855914  373797 system_pods.go:61] "kube-apiserver-embed-certs-412306" [0c456387-52ea-4271-af83-9b87f7ddc832] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:17:31.855923  373797 system_pods.go:61] "kube-controller-manager-embed-certs-412306" [cebfc94c-5d85-40f3-8099-b50676f43ef5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:17:31.855929  373797 system_pods.go:61] "kube-proxy-2vnjq" [10c4fa48-37ca-4164-83ef-7ab034f844a9] Running
	I1123 10:17:31.855939  373797 system_pods.go:61] "kube-scheduler-embed-certs-412306" [9384ec5c-f592-4f4d-84ba-313b7eabf50c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:17:31.855944  373797 system_pods.go:61] "storage-provisioner" [199ec01f-2a64-4666-af02-cd1ad7ae4cc2] Running
	I1123 10:17:31.855952  373797 system_pods.go:74] duration metric: took 3.805802ms to wait for pod list to return data ...
	I1123 10:17:31.855961  373797 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:17:31.858650  373797 default_sa.go:45] found service account: "default"
	I1123 10:17:31.858679  373797 default_sa.go:55] duration metric: took 2.711408ms for default service account to be created ...
	I1123 10:17:31.858690  373797 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:17:31.862049  373797 system_pods.go:86] 8 kube-system pods found
	I1123 10:17:31.862079  373797 system_pods.go:89] "coredns-66bc5c9577-fxl7j" [4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:31.862105  373797 system_pods.go:89] "etcd-embed-certs-412306" [f8befdc6-c172-4569-9ca7-2d3ba827dbb5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:17:31.862124  373797 system_pods.go:89] "kindnet-sm2h2" [1af4c3f2-8377-4a64-9499-502b9841a81d] Running
	I1123 10:17:31.862134  373797 system_pods.go:89] "kube-apiserver-embed-certs-412306" [0c456387-52ea-4271-af83-9b87f7ddc832] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:17:31.862144  373797 system_pods.go:89] "kube-controller-manager-embed-certs-412306" [cebfc94c-5d85-40f3-8099-b50676f43ef5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:17:31.862150  373797 system_pods.go:89] "kube-proxy-2vnjq" [10c4fa48-37ca-4164-83ef-7ab034f844a9] Running
	I1123 10:17:31.862163  373797 system_pods.go:89] "kube-scheduler-embed-certs-412306" [9384ec5c-f592-4f4d-84ba-313b7eabf50c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:17:31.862169  373797 system_pods.go:89] "storage-provisioner" [199ec01f-2a64-4666-af02-cd1ad7ae4cc2] Running
	I1123 10:17:31.862179  373797 system_pods.go:126] duration metric: took 3.483683ms to wait for k8s-apps to be running ...
	I1123 10:17:31.862188  373797 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:17:31.862236  373797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:17:31.880556  373797 system_svc.go:56] duration metric: took 18.357008ms WaitForService to wait for kubelet
	I1123 10:17:31.880607  373797 kubeadm.go:587] duration metric: took 4.408143491s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:17:31.880631  373797 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:17:31.884219  373797 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:17:31.884253  373797 node_conditions.go:123] node cpu capacity is 8
	I1123 10:17:31.884271  373797 node_conditions.go:105] duration metric: took 3.634037ms to run NodePressure ...
	I1123 10:17:31.884287  373797 start.go:242] waiting for startup goroutines ...
	I1123 10:17:31.884299  373797 start.go:247] waiting for cluster config update ...
	I1123 10:17:31.884319  373797 start.go:256] writing updated cluster config ...
	I1123 10:17:31.884624  373797 ssh_runner.go:195] Run: rm -f paused
	I1123 10:17:31.889946  373797 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:17:31.894375  373797 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fxl7j" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 10:17:33.901572  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:33.523784  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:35.846995  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	I1123 10:17:32.941081  371315 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001868854s
	I1123 10:17:32.945152  371315 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 10:17:32.945305  371315 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I1123 10:17:32.945433  371315 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 10:17:32.945515  371315 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 10:17:35.861865  371315 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.916644987s
	I1123 10:17:36.776622  371315 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.831435695s
	I1123 10:17:38.447477  371315 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502246404s
	I1123 10:17:38.458614  371315 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:17:38.467767  371315 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:17:38.476049  371315 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:17:38.476376  371315 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-772252 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:17:38.484454  371315 kubeadm.go:319] [bootstrap-token] Using token: 7c739u.zwt0bal8xrfj12xj
	W1123 10:17:34.916285  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:37.413216  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:36.400976  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:38.912096  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	I1123 10:17:38.485658  371315 out.go:252]   - Configuring RBAC rules ...
	I1123 10:17:38.485833  371315 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:17:38.489646  371315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:17:38.494425  371315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:17:38.496889  371315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:17:38.499031  371315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:17:38.501264  371315 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:17:38.853661  371315 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:17:39.273659  371315 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:17:39.853812  371315 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:17:39.855808  371315 kubeadm.go:319] 
	I1123 10:17:39.855908  371315 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:17:39.855921  371315 kubeadm.go:319] 
	I1123 10:17:39.856050  371315 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:17:39.856060  371315 kubeadm.go:319] 
	I1123 10:17:39.856130  371315 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:17:39.856198  371315 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:17:39.856261  371315 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:17:39.856271  371315 kubeadm.go:319] 
	I1123 10:17:39.856335  371315 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:17:39.856340  371315 kubeadm.go:319] 
	I1123 10:17:39.856394  371315 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:17:39.856399  371315 kubeadm.go:319] 
	I1123 10:17:39.856459  371315 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:17:39.856552  371315 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:17:39.856635  371315 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:17:39.856644  371315 kubeadm.go:319] 
	I1123 10:17:39.856747  371315 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:17:39.856841  371315 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:17:39.856850  371315 kubeadm.go:319] 
	I1123 10:17:39.856946  371315 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 7c739u.zwt0bal8xrfj12xj \
	I1123 10:17:39.857068  371315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 \
	I1123 10:17:39.857106  371315 kubeadm.go:319] 	--control-plane 
	I1123 10:17:39.857112  371315 kubeadm.go:319] 
	I1123 10:17:39.857223  371315 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:17:39.857231  371315 kubeadm.go:319] 
	I1123 10:17:39.857360  371315 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 7c739u.zwt0bal8xrfj12xj \
	I1123 10:17:39.857522  371315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 
	I1123 10:17:39.861171  371315 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 10:17:39.861361  371315 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 10:17:39.861384  371315 cni.go:84] Creating CNI manager for ""
	I1123 10:17:39.861392  371315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:17:39.863656  371315 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1123 10:17:38.341179  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:40.341963  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	I1123 10:17:39.864757  371315 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:17:39.869984  371315 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:17:39.870008  371315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:17:39.886324  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:17:40.362280  371315 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:17:40.362400  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:40.362400  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-772252 minikube.k8s.io/updated_at=2025_11_23T10_17_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=default-k8s-diff-port-772252 minikube.k8s.io/primary=true
	I1123 10:17:40.379214  371315 ops.go:34] apiserver oom_adj: -16
	I1123 10:17:40.464921  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:40.965405  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:41.465003  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:41.965821  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:42.464950  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1123 10:17:39.414230  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:41.914196  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:41.400282  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:43.899909  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	I1123 10:17:42.965639  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:43.465528  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:43.965079  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:44.464998  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:44.965763  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:45.037128  371315 kubeadm.go:1114] duration metric: took 4.67480031s to wait for elevateKubeSystemPrivileges
	I1123 10:17:45.037171  371315 kubeadm.go:403] duration metric: took 19.073779602s to StartCluster
	I1123 10:17:45.037193  371315 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:45.037267  371315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:17:45.039120  371315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:45.039419  371315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:17:45.039444  371315 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:17:45.039520  371315 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:17:45.039628  371315 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-772252"
	I1123 10:17:45.039656  371315 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-772252"
	I1123 10:17:45.039686  371315 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-772252"
	I1123 10:17:45.039661  371315 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-772252"
	I1123 10:17:45.039720  371315 config.go:182] Loaded profile config "default-k8s-diff-port-772252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:45.039784  371315 host.go:66] Checking if "default-k8s-diff-port-772252" exists ...
	I1123 10:17:45.040159  371315 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:17:45.040405  371315 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:17:45.041405  371315 out.go:179] * Verifying Kubernetes components...
	I1123 10:17:45.042675  371315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:45.064542  371315 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-772252"
	I1123 10:17:45.064587  371315 host.go:66] Checking if "default-k8s-diff-port-772252" exists ...
	I1123 10:17:45.064919  371315 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:17:45.065873  371315 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:17:45.067076  371315 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:45.067111  371315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:17:45.067169  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:45.085477  371315 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:45.085507  371315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:17:45.086250  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:45.092224  371315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:17:45.114171  371315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:17:45.126365  371315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:17:45.189744  371315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:45.218033  371315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:45.235955  371315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:45.315901  371315 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1123 10:17:45.317142  371315 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-772252" to be "Ready" ...
	I1123 10:17:45.535405  371315 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1123 10:17:42.843988  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:45.342896  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	I1123 10:17:45.536493  371315 addons.go:530] duration metric: took 496.970486ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 10:17:45.820948  371315 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-772252" context rescaled to 1 replicas
	W1123 10:17:47.319425  371315 node_ready.go:57] node "default-k8s-diff-port-772252" has "Ready":"False" status (will retry)
	W1123 10:17:43.914556  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:46.414198  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:45.900010  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:47.900260  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:47.841815  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:50.341880  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:49.319741  371315 node_ready.go:57] node "default-k8s-diff-port-772252" has "Ready":"False" status (will retry)
	W1123 10:17:51.320336  371315 node_ready.go:57] node "default-k8s-diff-port-772252" has "Ready":"False" status (will retry)
	W1123 10:17:48.913341  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:51.412869  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:53.413536  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:50.400011  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:52.900077  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	I1123 10:17:53.913334  366730 pod_ready.go:94] pod "coredns-5dd5756b68-fsbfv" is "Ready"
	I1123 10:17:53.913363  366730 pod_ready.go:86] duration metric: took 39.505598501s for pod "coredns-5dd5756b68-fsbfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:53.916455  366730 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:53.920979  366730 pod_ready.go:94] pod "etcd-old-k8s-version-990757" is "Ready"
	I1123 10:17:53.921004  366730 pod_ready.go:86] duration metric: took 4.524758ms for pod "etcd-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:53.923876  366730 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:53.928363  366730 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-990757" is "Ready"
	I1123 10:17:53.928389  366730 pod_ready.go:86] duration metric: took 4.49134ms for pod "kube-apiserver-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:53.931268  366730 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:54.111689  366730 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-990757" is "Ready"
	I1123 10:17:54.111728  366730 pod_ready.go:86] duration metric: took 180.43869ms for pod "kube-controller-manager-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:54.312490  366730 pod_ready.go:83] waiting for pod "kube-proxy-99g4b" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:54.711645  366730 pod_ready.go:94] pod "kube-proxy-99g4b" is "Ready"
	I1123 10:17:54.711677  366730 pod_ready.go:86] duration metric: took 399.161367ms for pod "kube-proxy-99g4b" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:54.912461  366730 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:55.311759  366730 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-990757" is "Ready"
	I1123 10:17:55.311784  366730 pod_ready.go:86] duration metric: took 399.295747ms for pod "kube-scheduler-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:55.311813  366730 pod_ready.go:40] duration metric: took 40.908845551s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:17:55.356075  366730 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1123 10:17:55.357834  366730 out.go:203] 
	W1123 10:17:55.359077  366730 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 10:17:55.360393  366730 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 10:17:55.361705  366730 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-990757" cluster and "default" namespace by default
	W1123 10:17:52.841432  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:55.341775  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:57.341870  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:53.320896  371315 node_ready.go:57] node "default-k8s-diff-port-772252" has "Ready":"False" status (will retry)
	W1123 10:17:55.820856  371315 node_ready.go:57] node "default-k8s-diff-port-772252" has "Ready":"False" status (will retry)
	I1123 10:17:56.320034  371315 node_ready.go:49] node "default-k8s-diff-port-772252" is "Ready"
	I1123 10:17:56.320062  371315 node_ready.go:38] duration metric: took 11.002894749s for node "default-k8s-diff-port-772252" to be "Ready" ...
	I1123 10:17:56.320077  371315 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:17:56.320168  371315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:17:56.333026  371315 api_server.go:72] duration metric: took 11.293527033s to wait for apiserver process to appear ...
	I1123 10:17:56.333046  371315 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:17:56.333064  371315 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1123 10:17:56.337320  371315 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1123 10:17:56.338383  371315 api_server.go:141] control plane version: v1.34.1
	I1123 10:17:56.338411  371315 api_server.go:131] duration metric: took 5.357543ms to wait for apiserver health ...
	I1123 10:17:56.338423  371315 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:17:56.342472  371315 system_pods.go:59] 8 kube-system pods found
	I1123 10:17:56.342509  371315 system_pods.go:61] "coredns-66bc5c9577-c5c4c" [b393f50c-f83f-45b4-8c27-56971c3279c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:56.342517  371315 system_pods.go:61] "etcd-default-k8s-diff-port-772252" [de179811-197e-4e4b-9933-f051ca479011] Running
	I1123 10:17:56.342525  371315 system_pods.go:61] "kindnet-4dnjf" [3258335f-0700-4a89-8857-c10cfc091182] Running
	I1123 10:17:56.342531  371315 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-772252" [080999dc-1510-4086-aa20-f7975eb1cb69] Running
	I1123 10:17:56.342538  371315 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-772252" [215dd3a6-702c-4aaf-9299-6d5de9eb21b5] Running
	I1123 10:17:56.342542  371315 system_pods.go:61] "kube-proxy-xfghg" [5cf715f4-c1ca-4938-a213-7095cb2c7823] Running
	I1123 10:17:56.342549  371315 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-772252" [c020136f-1728-4423-b34e-932682df1f89] Running
	I1123 10:17:56.342554  371315 system_pods.go:61] "storage-provisioner" [9d727e76-94f8-4344-820c-f2d4e83f5d87] Running
	I1123 10:17:56.342565  371315 system_pods.go:74] duration metric: took 4.133412ms to wait for pod list to return data ...
	I1123 10:17:56.342577  371315 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:17:56.344836  371315 default_sa.go:45] found service account: "default"
	I1123 10:17:56.344858  371315 default_sa.go:55] duration metric: took 2.273737ms for default service account to be created ...
	I1123 10:17:56.344868  371315 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:17:56.347696  371315 system_pods.go:86] 8 kube-system pods found
	I1123 10:17:56.347728  371315 system_pods.go:89] "coredns-66bc5c9577-c5c4c" [b393f50c-f83f-45b4-8c27-56971c3279c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:56.347736  371315 system_pods.go:89] "etcd-default-k8s-diff-port-772252" [de179811-197e-4e4b-9933-f051ca479011] Running
	I1123 10:17:56.347744  371315 system_pods.go:89] "kindnet-4dnjf" [3258335f-0700-4a89-8857-c10cfc091182] Running
	I1123 10:17:56.347754  371315 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772252" [080999dc-1510-4086-aa20-f7975eb1cb69] Running
	I1123 10:17:56.347760  371315 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772252" [215dd3a6-702c-4aaf-9299-6d5de9eb21b5] Running
	I1123 10:17:56.347768  371315 system_pods.go:89] "kube-proxy-xfghg" [5cf715f4-c1ca-4938-a213-7095cb2c7823] Running
	I1123 10:17:56.347773  371315 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772252" [c020136f-1728-4423-b34e-932682df1f89] Running
	I1123 10:17:56.347778  371315 system_pods.go:89] "storage-provisioner" [9d727e76-94f8-4344-820c-f2d4e83f5d87] Running
	I1123 10:17:56.347800  371315 retry.go:31] will retry after 302.24178ms: missing components: kube-dns
	I1123 10:17:56.653773  371315 system_pods.go:86] 8 kube-system pods found
	I1123 10:17:56.653806  371315 system_pods.go:89] "coredns-66bc5c9577-c5c4c" [b393f50c-f83f-45b4-8c27-56971c3279c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:56.653815  371315 system_pods.go:89] "etcd-default-k8s-diff-port-772252" [de179811-197e-4e4b-9933-f051ca479011] Running
	I1123 10:17:56.653820  371315 system_pods.go:89] "kindnet-4dnjf" [3258335f-0700-4a89-8857-c10cfc091182] Running
	I1123 10:17:56.653830  371315 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772252" [080999dc-1510-4086-aa20-f7975eb1cb69] Running
	I1123 10:17:56.653835  371315 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772252" [215dd3a6-702c-4aaf-9299-6d5de9eb21b5] Running
	I1123 10:17:56.653840  371315 system_pods.go:89] "kube-proxy-xfghg" [5cf715f4-c1ca-4938-a213-7095cb2c7823] Running
	I1123 10:17:56.653846  371315 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772252" [c020136f-1728-4423-b34e-932682df1f89] Running
	I1123 10:17:56.653851  371315 system_pods.go:89] "storage-provisioner" [9d727e76-94f8-4344-820c-f2d4e83f5d87] Running
	I1123 10:17:56.653871  371315 retry.go:31] will retry after 265.267308ms: missing components: kube-dns
	I1123 10:17:56.923296  371315 system_pods.go:86] 8 kube-system pods found
	I1123 10:17:56.923348  371315 system_pods.go:89] "coredns-66bc5c9577-c5c4c" [b393f50c-f83f-45b4-8c27-56971c3279c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:56.923356  371315 system_pods.go:89] "etcd-default-k8s-diff-port-772252" [de179811-197e-4e4b-9933-f051ca479011] Running
	I1123 10:17:56.923382  371315 system_pods.go:89] "kindnet-4dnjf" [3258335f-0700-4a89-8857-c10cfc091182] Running
	I1123 10:17:56.923389  371315 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772252" [080999dc-1510-4086-aa20-f7975eb1cb69] Running
	I1123 10:17:56.923401  371315 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772252" [215dd3a6-702c-4aaf-9299-6d5de9eb21b5] Running
	I1123 10:17:56.923407  371315 system_pods.go:89] "kube-proxy-xfghg" [5cf715f4-c1ca-4938-a213-7095cb2c7823] Running
	I1123 10:17:56.923412  371315 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772252" [c020136f-1728-4423-b34e-932682df1f89] Running
	I1123 10:17:56.923417  371315 system_pods.go:89] "storage-provisioner" [9d727e76-94f8-4344-820c-f2d4e83f5d87] Running
	I1123 10:17:56.923434  371315 retry.go:31] will retry after 380.263968ms: missing components: kube-dns
	I1123 10:17:57.307510  371315 system_pods.go:86] 8 kube-system pods found
	I1123 10:17:57.307546  371315 system_pods.go:89] "coredns-66bc5c9577-c5c4c" [b393f50c-f83f-45b4-8c27-56971c3279c0] Running
	I1123 10:17:57.307554  371315 system_pods.go:89] "etcd-default-k8s-diff-port-772252" [de179811-197e-4e4b-9933-f051ca479011] Running
	I1123 10:17:57.307562  371315 system_pods.go:89] "kindnet-4dnjf" [3258335f-0700-4a89-8857-c10cfc091182] Running
	I1123 10:17:57.307568  371315 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772252" [080999dc-1510-4086-aa20-f7975eb1cb69] Running
	I1123 10:17:57.307572  371315 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772252" [215dd3a6-702c-4aaf-9299-6d5de9eb21b5] Running
	I1123 10:17:57.307577  371315 system_pods.go:89] "kube-proxy-xfghg" [5cf715f4-c1ca-4938-a213-7095cb2c7823] Running
	I1123 10:17:57.307581  371315 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772252" [c020136f-1728-4423-b34e-932682df1f89] Running
	I1123 10:17:57.307586  371315 system_pods.go:89] "storage-provisioner" [9d727e76-94f8-4344-820c-f2d4e83f5d87] Running
	I1123 10:17:57.307596  371315 system_pods.go:126] duration metric: took 962.72072ms to wait for k8s-apps to be running ...
	I1123 10:17:57.307606  371315 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:17:57.307658  371315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:17:57.320972  371315 system_svc.go:56] duration metric: took 13.353924ms WaitForService to wait for kubelet
	I1123 10:17:57.321004  371315 kubeadm.go:587] duration metric: took 12.281511348s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:17:57.321022  371315 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:17:57.323660  371315 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:17:57.323692  371315 node_conditions.go:123] node cpu capacity is 8
	I1123 10:17:57.323712  371315 node_conditions.go:105] duration metric: took 2.684637ms to run NodePressure ...
	I1123 10:17:57.323726  371315 start.go:242] waiting for startup goroutines ...
	I1123 10:17:57.323742  371315 start.go:247] waiting for cluster config update ...
	I1123 10:17:57.323759  371315 start.go:256] writing updated cluster config ...
	I1123 10:17:57.324067  371315 ssh_runner.go:195] Run: rm -f paused
	I1123 10:17:57.328141  371315 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:17:57.331589  371315 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c5c4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.335257  371315 pod_ready.go:94] pod "coredns-66bc5c9577-c5c4c" is "Ready"
	I1123 10:17:57.335285  371315 pod_ready.go:86] duration metric: took 3.674367ms for pod "coredns-66bc5c9577-c5c4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.337137  371315 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.341306  371315 pod_ready.go:94] pod "etcd-default-k8s-diff-port-772252" is "Ready"
	I1123 10:17:57.341329  371315 pod_ready.go:86] duration metric: took 4.173911ms for pod "etcd-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.343139  371315 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.346731  371315 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-772252" is "Ready"
	I1123 10:17:57.346750  371315 pod_ready.go:86] duration metric: took 3.589943ms for pod "kube-apiserver-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.348459  371315 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.732573  371315 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-772252" is "Ready"
	I1123 10:17:57.732607  371315 pod_ready.go:86] duration metric: took 384.128293ms for pod "kube-controller-manager-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.932984  371315 pod_ready.go:83] waiting for pod "kube-proxy-xfghg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:58.331761  371315 pod_ready.go:94] pod "kube-proxy-xfghg" is "Ready"
	I1123 10:17:58.331788  371315 pod_ready.go:86] duration metric: took 398.77791ms for pod "kube-proxy-xfghg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:58.533376  371315 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:58.932675  371315 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-772252" is "Ready"
	I1123 10:17:58.932705  371315 pod_ready.go:86] duration metric: took 399.30371ms for pod "kube-scheduler-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:58.932717  371315 pod_ready.go:40] duration metric: took 1.604548656s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:17:58.976709  371315 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:17:58.978487  371315 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-772252" cluster and "default" namespace by default
	W1123 10:17:55.399817  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:57.899557  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:59.840864  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	I1123 10:18:00.341361  371192 pod_ready.go:94] pod "coredns-66bc5c9577-krmwt" is "Ready"
	I1123 10:18:00.341391  371192 pod_ready.go:86] duration metric: took 36.00558292s for pod "coredns-66bc5c9577-krmwt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:00.344015  371192 pod_ready.go:83] waiting for pod "etcd-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:00.348659  371192 pod_ready.go:94] pod "etcd-no-preload-541522" is "Ready"
	I1123 10:18:00.348689  371192 pod_ready.go:86] duration metric: took 4.650364ms for pod "etcd-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:00.351238  371192 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:00.354817  371192 pod_ready.go:94] pod "kube-apiserver-no-preload-541522" is "Ready"
	I1123 10:18:00.354840  371192 pod_ready.go:86] duration metric: took 3.5776ms for pod "kube-apiserver-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:00.356850  371192 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:00.540127  371192 pod_ready.go:94] pod "kube-controller-manager-no-preload-541522" is "Ready"
	I1123 10:18:00.540160  371192 pod_ready.go:86] duration metric: took 183.289677ms for pod "kube-controller-manager-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:00.740192  371192 pod_ready.go:83] waiting for pod "kube-proxy-sllct" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:01.139411  371192 pod_ready.go:94] pod "kube-proxy-sllct" is "Ready"
	I1123 10:18:01.139439  371192 pod_ready.go:86] duration metric: took 399.218147ms for pod "kube-proxy-sllct" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:01.340436  371192 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:01.740259  371192 pod_ready.go:94] pod "kube-scheduler-no-preload-541522" is "Ready"
	I1123 10:18:01.740295  371192 pod_ready.go:86] duration metric: took 399.829885ms for pod "kube-scheduler-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:01.740307  371192 pod_ready.go:40] duration metric: took 37.410392677s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:18:01.788412  371192 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:18:01.791159  371192 out.go:179] * Done! kubectl is now configured to use "no-preload-541522" cluster and "default" namespace by default
	W1123 10:18:00.399534  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:18:02.400234  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	I1123 10:18:02.899900  373797 pod_ready.go:94] pod "coredns-66bc5c9577-fxl7j" is "Ready"
	I1123 10:18:02.899931  373797 pod_ready.go:86] duration metric: took 31.005531566s for pod "coredns-66bc5c9577-fxl7j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:02.902103  373797 pod_ready.go:83] waiting for pod "etcd-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:02.905655  373797 pod_ready.go:94] pod "etcd-embed-certs-412306" is "Ready"
	I1123 10:18:02.905688  373797 pod_ready.go:86] duration metric: took 3.561728ms for pod "etcd-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:02.907483  373797 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:02.911179  373797 pod_ready.go:94] pod "kube-apiserver-embed-certs-412306" is "Ready"
	I1123 10:18:02.911205  373797 pod_ready.go:86] duration metric: took 3.701799ms for pod "kube-apiserver-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:02.912993  373797 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:03.099021  373797 pod_ready.go:94] pod "kube-controller-manager-embed-certs-412306" is "Ready"
	I1123 10:18:03.099054  373797 pod_ready.go:86] duration metric: took 186.04071ms for pod "kube-controller-manager-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:03.298482  373797 pod_ready.go:83] waiting for pod "kube-proxy-2vnjq" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:03.697866  373797 pod_ready.go:94] pod "kube-proxy-2vnjq" is "Ready"
	I1123 10:18:03.697900  373797 pod_ready.go:86] duration metric: took 399.390791ms for pod "kube-proxy-2vnjq" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:03.898175  373797 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:04.298226  373797 pod_ready.go:94] pod "kube-scheduler-embed-certs-412306" is "Ready"
	I1123 10:18:04.298262  373797 pod_ready.go:86] duration metric: took 400.039787ms for pod "kube-scheduler-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:04.298279  373797 pod_ready.go:40] duration metric: took 32.408301003s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:18:04.344316  373797 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:18:04.346173  373797 out.go:179] * Done! kubectl is now configured to use "embed-certs-412306" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 10:17:34 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:34.325300331Z" level=info msg="Created container ffe2f071023537db208786f25a6aea227c1fe39c1b3f10f869486618924f5387: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fm8f6/kubernetes-dashboard" id=f9f20476-91ec-410b-bf0d-9f737f243302 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:34 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:34.326308967Z" level=info msg="Starting container: ffe2f071023537db208786f25a6aea227c1fe39c1b3f10f869486618924f5387" id=c40516c2-adfb-4096-9029-8d4b18bd58e4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:17:34 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:34.328762005Z" level=info msg="Started container" PID=1727 containerID=ffe2f071023537db208786f25a6aea227c1fe39c1b3f10f869486618924f5387 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fm8f6/kubernetes-dashboard id=c40516c2-adfb-4096-9029-8d4b18bd58e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c1b402a615ce15cd50896f7a31664d779f1503cbc4c093744eedd8055d129f91
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.38624378Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c6a5a20c-5a2b-4bb6-86dc-5bb8f466f4e6 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.387156157Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4dea1b1b-4006-4d5c-a603-075272002f0e name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.388257273Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=00dc5dcf-5004-43df-a8b8-fa06a1a3d0da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.388392657Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.392372509Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.392535809Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/364a7bd399a56469353e34c2a3024e985260161a2ec036c466fd751721d832af/merged/etc/passwd: no such file or directory"
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.392565464Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/364a7bd399a56469353e34c2a3024e985260161a2ec036c466fd751721d832af/merged/etc/group: no such file or directory"
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.392826582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.428333836Z" level=info msg="Created container 9ccd16d74353c15e1600527cf40023e30033f332b977b03880686a3913da40af: kube-system/storage-provisioner/storage-provisioner" id=00dc5dcf-5004-43df-a8b8-fa06a1a3d0da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.4289488Z" level=info msg="Starting container: 9ccd16d74353c15e1600527cf40023e30033f332b977b03880686a3913da40af" id=fd937c79-d3a7-437a-894b-36f81ab22368 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:17:44 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:44.43071366Z" level=info msg="Started container" PID=1750 containerID=9ccd16d74353c15e1600527cf40023e30033f332b977b03880686a3913da40af description=kube-system/storage-provisioner/storage-provisioner id=fd937c79-d3a7-437a-894b-36f81ab22368 name=/runtime.v1.RuntimeService/StartContainer sandboxID=64dba340508095d478402b9079b1d6b5291174a1866c818346f19af2629b3cc2
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.269045599Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=119ac1b8-f6ab-4390-a2b8-ceaa45552537 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.269927942Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a8ed94ab-5b2f-4c4b-b7c6-66a3db7af03c name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.270950253Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn/dashboard-metrics-scraper" id=2543e1ba-ab94-4fc5-b05f-73c3ec5f2127 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.271106802Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.276939758Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.277566388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.313623713Z" level=info msg="Created container 23ccf4ce86c662244f4b739e4ab18cdc793df7a827799056f377d3f50eab0214: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn/dashboard-metrics-scraper" id=2543e1ba-ab94-4fc5-b05f-73c3ec5f2127 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.314141298Z" level=info msg="Starting container: 23ccf4ce86c662244f4b739e4ab18cdc793df7a827799056f377d3f50eab0214" id=7990174e-f25d-4868-bc11-65fbe40c6f57 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.3192972Z" level=info msg="Started container" PID=1765 containerID=23ccf4ce86c662244f4b739e4ab18cdc793df7a827799056f377d3f50eab0214 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn/dashboard-metrics-scraper id=7990174e-f25d-4868-bc11-65fbe40c6f57 name=/runtime.v1.RuntimeService/StartContainer sandboxID=63aae10b094b91f41f18467b9362839b528d5d307d551876eeb50d04b9ed8d09
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.405594899Z" level=info msg="Removing container: ac1800cd9d6bd93eb082a400dd68302dc038514b14aec60a85e0f0add9ad305f" id=ed264d4c-e1e3-40bd-a0f8-567c5eb3db79 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:17:50 old-k8s-version-990757 crio[572]: time="2025-11-23T10:17:50.415664742Z" level=info msg="Removed container ac1800cd9d6bd93eb082a400dd68302dc038514b14aec60a85e0f0add9ad305f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn/dashboard-metrics-scraper" id=ed264d4c-e1e3-40bd-a0f8-567c5eb3db79 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	23ccf4ce86c66       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   63aae10b094b9       dashboard-metrics-scraper-5f989dc9cf-bfhkn       kubernetes-dashboard
	9ccd16d74353c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           27 seconds ago       Running             storage-provisioner         1                   64dba34050809       storage-provisioner                              kube-system
	ffe2f07102353       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   37 seconds ago       Running             kubernetes-dashboard        0                   c1b402a615ce1       kubernetes-dashboard-8694d4445c-fm8f6            kubernetes-dashboard
	d3e2f1261d87f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   d48425b59c112       busybox                                          default
	a66dd032f72a2       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           58 seconds ago       Running             coredns                     0                   6af5b2fceaabe       coredns-5dd5756b68-fsbfv                         kube-system
	7d2173a013595       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           58 seconds ago       Running             kube-proxy                  0                   4e20f159863aa       kube-proxy-99g4b                                 kube-system
	cbaeadd56435f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           58 seconds ago       Running             kindnet-cni                 0                   9d011d0f754b6       kindnet-nz2m9                                    kube-system
	c6bd46fb7d986       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   64dba34050809       storage-provisioner                              kube-system
	556e97942a390       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   5af189241ebaf       kube-apiserver-old-k8s-version-990757            kube-system
	674b4af1a0427       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   b1ead877871ac       kube-controller-manager-old-k8s-version-990757   kube-system
	c9e0d8276aa07       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   8ed9b1c11741d       kube-scheduler-old-k8s-version-990757            kube-system
	ebac26e4ce8f3       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   67782f4153cde       etcd-old-k8s-version-990757                      kube-system
	
	
	==> coredns [a66dd032f72a291c4b9137f10802d9fbf947163ac4ec744f05cff426d166d072] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34267 - 10079 "HINFO IN 3708039612012200968.3694113916681524421. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.04671039s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-990757
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-990757
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=old-k8s-version-990757
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_16_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:16:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-990757
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:18:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:17:43 +0000   Sun, 23 Nov 2025 10:16:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:17:43 +0000   Sun, 23 Nov 2025 10:16:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:17:43 +0000   Sun, 23 Nov 2025 10:16:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:17:43 +0000   Sun, 23 Nov 2025 10:16:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-990757
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                63027792-4520-472e-b216-dd92789c4530
	  Boot ID:                    37682299-5e60-467e-85b2-43c912a4056e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-5dd5756b68-fsbfv                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     114s
	  kube-system                 etcd-old-k8s-version-990757                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m7s
	  kube-system                 kindnet-nz2m9                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-old-k8s-version-990757             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-old-k8s-version-990757    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-proxy-99g4b                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-old-k8s-version-990757             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-bfhkn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-fm8f6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 112s               kube-proxy       
	  Normal  Starting                 58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m7s               kubelet          Node old-k8s-version-990757 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s               kubelet          Node old-k8s-version-990757 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s               kubelet          Node old-k8s-version-990757 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m7s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           115s               node-controller  Node old-k8s-version-990757 event: Registered Node old-k8s-version-990757 in Controller
	  Normal  NodeReady                100s               kubelet          Node old-k8s-version-990757 status is now: NodeReady
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node old-k8s-version-990757 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node old-k8s-version-990757 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)  kubelet          Node old-k8s-version-990757 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                node-controller  Node old-k8s-version-990757 event: Registered Node old-k8s-version-990757 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[ +16.383752] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 09:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 10:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[Nov23 10:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 16 63 a6 3b 7c 08 06
	[  +0.000421] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e f8 56 88 48 d7 08 06
	[  +0.082350] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff be 6d 17 98 af e9 08 06
	[  +0.000334] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[ +24.687881] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 3c b3 56 e6 32 08 06
	[  +0.000364] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da b2 25 9e f0 5d 08 06
	[Nov23 10:16] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	[ +42.472302] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 bc be 6d 36 b3 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	
	
	==> etcd [ebac26e4ce8f31e1b8f09e6ec06a5c05e6707bb591cc39abd93e16c3ee829fcc] <==
	{"level":"info","ts":"2025-11-23T10:17:10.82653Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T10:17:10.826561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-23T10:17:10.826683Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-23T10:17:10.826803Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T10:17:10.826841Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T10:17:10.830075Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T10:17:10.830176Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T10:17:10.830239Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T10:17:10.830444Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T10:17:10.830482Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T10:17:11.916364Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-23T10:17:11.916422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-23T10:17:11.916441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-23T10:17:11.916457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-23T10:17:11.916464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-23T10:17:11.916474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-23T10:17:11.916484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-23T10:17:11.917239Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-990757 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T10:17:11.917243Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T10:17:11.917259Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T10:17:11.917532Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T10:17:11.917585Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T10:17:11.918702Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T10:17:11.918714Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-23T10:17:18.379883Z","caller":"traceutil/trace.go:171","msg":"trace[961230357] transaction","detail":"{read_only:false; response_revision:460; number_of_response:1; }","duration":"166.560111ms","start":"2025-11-23T10:17:18.213304Z","end":"2025-11-23T10:17:18.379864Z","steps":["trace[961230357] 'process raft request'  (duration: 126.430146ms)","trace[961230357] 'compare'  (duration: 39.999941ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:18:12 up  3:00,  0 user,  load average: 4.54, 4.99, 2.99
	Linux old-k8s-version-990757 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cbaeadd56435f3be2e882ca71a5e4c2a576610a12fea8a213be3214b68289f60] <==
	I1123 10:17:13.908674       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:17:13.908924       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 10:17:13.910907       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:17:13.910936       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:17:13.910972       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:17:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:17:14.207283       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:17:14.207432       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:17:14.207475       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:17:14.208344       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:17:14.407726       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:17:14.407757       1 metrics.go:72] Registering metrics
	I1123 10:17:14.407821       1 controller.go:711] "Syncing nftables rules"
	I1123 10:17:24.208531       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:17:24.208621       1 main.go:301] handling current node
	I1123 10:17:34.207482       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:17:34.207535       1 main.go:301] handling current node
	I1123 10:17:44.208016       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:17:44.208051       1 main.go:301] handling current node
	I1123 10:17:54.207433       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:17:54.207481       1 main.go:301] handling current node
	I1123 10:18:04.214177       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:18:04.214247       1 main.go:301] handling current node
	
	
	==> kube-apiserver [556e97942a390024b57d00ce6d2dab22e5234986f456ccd01a8426510bf12dc2] <==
	I1123 10:17:12.977851       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1123 10:17:13.059875       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1123 10:17:13.071566       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1123 10:17:13.071671       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1123 10:17:13.076565       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 10:17:13.076723       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1123 10:17:13.076837       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 10:17:13.077492       1 aggregator.go:166] initial CRD sync complete...
	I1123 10:17:13.077582       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 10:17:13.077614       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:17:13.077641       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:17:13.077865       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1123 10:17:13.077867       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 10:17:13.123547       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:17:13.975696       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:17:14.222220       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 10:17:14.256962       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 10:17:14.275372       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:17:14.285329       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:17:14.293713       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 10:17:14.341030       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.245.16"}
	I1123 10:17:14.360372       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.250.173"}
	I1123 10:17:25.734220       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1123 10:17:25.746892       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:17:25.766358       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [674b4af1a0427bfaca38a9f2c3d8e894dc1b8e4c4bdb0b56c34b4ab06cffe9a1] <==
	I1123 10:17:25.782711       1 event.go:307] "Event occurred" object="kubernetes-dashboard" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/kubernetes-dashboard: endpoints \"kubernetes-dashboard\" already exists"
	I1123 10:17:25.782748       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1123 10:17:25.787885       1 shared_informer.go:318] Caches are synced for service account
	I1123 10:17:25.794845       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.894295ms"
	I1123 10:17:25.798752       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="23.524651ms"
	I1123 10:17:25.798971       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.366µs"
	I1123 10:17:25.808007       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.088266ms"
	I1123 10:17:25.808694       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.383µs"
	I1123 10:17:25.809015       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="78.062µs"
	I1123 10:17:25.821926       1 shared_informer.go:318] Caches are synced for stateful set
	I1123 10:17:25.863922       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 10:17:25.871939       1 shared_informer.go:318] Caches are synced for attach detach
	I1123 10:17:25.923020       1 shared_informer.go:318] Caches are synced for persistent volume
	I1123 10:17:26.283284       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 10:17:26.340077       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 10:17:26.340137       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 10:17:30.362547       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.809µs"
	I1123 10:17:31.368665       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="108.643µs"
	I1123 10:17:32.461533       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.571µs"
	I1123 10:17:34.390971       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.180487ms"
	I1123 10:17:34.391082       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="55.22µs"
	I1123 10:17:50.416147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="100.857µs"
	I1123 10:17:53.848372       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.166239ms"
	I1123 10:17:53.848503       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.752µs"
	I1123 10:17:56.083544       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.025µs"
	
	
	==> kube-proxy [7d2173a013595020de9a41e415a6a98ae7dc0077b210812ebda0b0af5473a287] <==
	I1123 10:17:13.771082       1 server_others.go:69] "Using iptables proxy"
	I1123 10:17:13.793895       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1123 10:17:13.835778       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:17:13.841865       1 server_others.go:152] "Using iptables Proxier"
	I1123 10:17:13.841933       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 10:17:13.841943       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 10:17:13.842005       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 10:17:13.842927       1 server.go:846] "Version info" version="v1.28.0"
	I1123 10:17:13.843129       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:17:13.843889       1 config.go:315] "Starting node config controller"
	I1123 10:17:13.843966       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 10:17:13.844511       1 config.go:188] "Starting service config controller"
	I1123 10:17:13.844539       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 10:17:13.844565       1 config.go:97] "Starting endpoint slice config controller"
	I1123 10:17:13.844569       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 10:17:13.944177       1 shared_informer.go:318] Caches are synced for node config
	I1123 10:17:13.945419       1 shared_informer.go:318] Caches are synced for service config
	I1123 10:17:13.945521       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c9e0d8276aa071eee136baabda6e6268adcd34c9a47ea98e77308ea23679b766] <==
	I1123 10:17:11.439821       1 serving.go:348] Generated self-signed cert in-memory
	W1123 10:17:13.010486       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 10:17:13.010546       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 10:17:13.010561       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 10:17:13.010570       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 10:17:13.039713       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1123 10:17:13.039764       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:17:13.041749       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:17:13.041787       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1123 10:17:13.044231       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	W1123 10:17:13.049879       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1123 10:17:13.049944       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1123 10:17:13.044324       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1123 10:17:13.142959       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 10:17:25 old-k8s-version-990757 kubelet[736]: I1123 10:17:25.897393     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwcnl\" (UniqueName: \"kubernetes.io/projected/ef986112-2b84-4018-a524-06c1bd693ed4-kube-api-access-vwcnl\") pod \"kubernetes-dashboard-8694d4445c-fm8f6\" (UID: \"ef986112-2b84-4018-a524-06c1bd693ed4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fm8f6"
	Nov 23 10:17:25 old-k8s-version-990757 kubelet[736]: I1123 10:17:25.897449     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc4rx\" (UniqueName: \"kubernetes.io/projected/ab90c537-1023-4768-8724-1bd443811215-kube-api-access-gc4rx\") pod \"dashboard-metrics-scraper-5f989dc9cf-bfhkn\" (UID: \"ab90c537-1023-4768-8724-1bd443811215\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn"
	Nov 23 10:17:25 old-k8s-version-990757 kubelet[736]: I1123 10:17:25.897470     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ab90c537-1023-4768-8724-1bd443811215-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-bfhkn\" (UID: \"ab90c537-1023-4768-8724-1bd443811215\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn"
	Nov 23 10:17:25 old-k8s-version-990757 kubelet[736]: I1123 10:17:25.897497     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ef986112-2b84-4018-a524-06c1bd693ed4-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-fm8f6\" (UID: \"ef986112-2b84-4018-a524-06c1bd693ed4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fm8f6"
	Nov 23 10:17:30 old-k8s-version-990757 kubelet[736]: I1123 10:17:30.343003     736 scope.go:117] "RemoveContainer" containerID="0637351a00d8d7d37ed69f59533ec14ce1fcf7142851c8a2844018d2fd3dee5b"
	Nov 23 10:17:31 old-k8s-version-990757 kubelet[736]: I1123 10:17:31.348696     736 scope.go:117] "RemoveContainer" containerID="0637351a00d8d7d37ed69f59533ec14ce1fcf7142851c8a2844018d2fd3dee5b"
	Nov 23 10:17:31 old-k8s-version-990757 kubelet[736]: I1123 10:17:31.348998     736 scope.go:117] "RemoveContainer" containerID="ac1800cd9d6bd93eb082a400dd68302dc038514b14aec60a85e0f0add9ad305f"
	Nov 23 10:17:31 old-k8s-version-990757 kubelet[736]: E1123 10:17:31.350795     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-bfhkn_kubernetes-dashboard(ab90c537-1023-4768-8724-1bd443811215)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn" podUID="ab90c537-1023-4768-8724-1bd443811215"
	Nov 23 10:17:32 old-k8s-version-990757 kubelet[736]: I1123 10:17:32.354052     736 scope.go:117] "RemoveContainer" containerID="ac1800cd9d6bd93eb082a400dd68302dc038514b14aec60a85e0f0add9ad305f"
	Nov 23 10:17:32 old-k8s-version-990757 kubelet[736]: E1123 10:17:32.354507     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-bfhkn_kubernetes-dashboard(ab90c537-1023-4768-8724-1bd443811215)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn" podUID="ab90c537-1023-4768-8724-1bd443811215"
	Nov 23 10:17:34 old-k8s-version-990757 kubelet[736]: I1123 10:17:34.379799     736 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fm8f6" podStartSLOduration=1.225950688 podCreationTimestamp="2025-11-23 10:17:25 +0000 UTC" firstStartedPulling="2025-11-23 10:17:26.111594433 +0000 UTC m=+15.948173439" lastFinishedPulling="2025-11-23 10:17:34.265375983 +0000 UTC m=+24.101954990" observedRunningTime="2025-11-23 10:17:34.377994742 +0000 UTC m=+24.214573752" watchObservedRunningTime="2025-11-23 10:17:34.379732239 +0000 UTC m=+24.216311250"
	Nov 23 10:17:36 old-k8s-version-990757 kubelet[736]: I1123 10:17:36.072809     736 scope.go:117] "RemoveContainer" containerID="ac1800cd9d6bd93eb082a400dd68302dc038514b14aec60a85e0f0add9ad305f"
	Nov 23 10:17:36 old-k8s-version-990757 kubelet[736]: E1123 10:17:36.073296     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-bfhkn_kubernetes-dashboard(ab90c537-1023-4768-8724-1bd443811215)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn" podUID="ab90c537-1023-4768-8724-1bd443811215"
	Nov 23 10:17:44 old-k8s-version-990757 kubelet[736]: I1123 10:17:44.385679     736 scope.go:117] "RemoveContainer" containerID="c6bd46fb7d9861dd655a23db64bd18f5e89613a832e4638352e74fcf52951f8f"
	Nov 23 10:17:50 old-k8s-version-990757 kubelet[736]: I1123 10:17:50.268516     736 scope.go:117] "RemoveContainer" containerID="ac1800cd9d6bd93eb082a400dd68302dc038514b14aec60a85e0f0add9ad305f"
	Nov 23 10:17:50 old-k8s-version-990757 kubelet[736]: I1123 10:17:50.404399     736 scope.go:117] "RemoveContainer" containerID="ac1800cd9d6bd93eb082a400dd68302dc038514b14aec60a85e0f0add9ad305f"
	Nov 23 10:17:50 old-k8s-version-990757 kubelet[736]: I1123 10:17:50.404703     736 scope.go:117] "RemoveContainer" containerID="23ccf4ce86c662244f4b739e4ab18cdc793df7a827799056f377d3f50eab0214"
	Nov 23 10:17:50 old-k8s-version-990757 kubelet[736]: E1123 10:17:50.405062     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-bfhkn_kubernetes-dashboard(ab90c537-1023-4768-8724-1bd443811215)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn" podUID="ab90c537-1023-4768-8724-1bd443811215"
	Nov 23 10:17:56 old-k8s-version-990757 kubelet[736]: I1123 10:17:56.071759     736 scope.go:117] "RemoveContainer" containerID="23ccf4ce86c662244f4b739e4ab18cdc793df7a827799056f377d3f50eab0214"
	Nov 23 10:17:56 old-k8s-version-990757 kubelet[736]: E1123 10:17:56.072204     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-bfhkn_kubernetes-dashboard(ab90c537-1023-4768-8724-1bd443811215)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bfhkn" podUID="ab90c537-1023-4768-8724-1bd443811215"
	Nov 23 10:18:07 old-k8s-version-990757 kubelet[736]: I1123 10:18:07.501752     736 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 23 10:18:07 old-k8s-version-990757 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:18:07 old-k8s-version-990757 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:18:07 old-k8s-version-990757 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 10:18:07 old-k8s-version-990757 systemd[1]: kubelet.service: Consumed 1.667s CPU time.
	
	
	==> kubernetes-dashboard [ffe2f071023537db208786f25a6aea227c1fe39c1b3f10f869486618924f5387] <==
	2025/11/23 10:17:34 Starting overwatch
	2025/11/23 10:17:34 Using namespace: kubernetes-dashboard
	2025/11/23 10:17:34 Using in-cluster config to connect to apiserver
	2025/11/23 10:17:34 Using secret token for csrf signing
	2025/11/23 10:17:34 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 10:17:34 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 10:17:34 Successful initial request to the apiserver, version: v1.28.0
	2025/11/23 10:17:34 Generating JWE encryption key
	2025/11/23 10:17:34 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 10:17:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 10:17:34 Initializing JWE encryption key from synchronized object
	2025/11/23 10:17:34 Creating in-cluster Sidecar client
	2025/11/23 10:17:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:17:34 Serving insecurely on HTTP port: 9090
	2025/11/23 10:18:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [9ccd16d74353c15e1600527cf40023e30033f332b977b03880686a3913da40af] <==
	I1123 10:17:44.443476       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:17:44.451494       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:17:44.451543       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 10:18:01.848951       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:18:01.849052       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"35efb046-0c13-4b37-bd0a-2155a92525f0", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-990757_d2f4633f-8f97-4e35-b33f-041482bd8d35 became leader
	I1123 10:18:01.849127       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-990757_d2f4633f-8f97-4e35-b33f-041482bd8d35!
	I1123 10:18:01.949403       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-990757_d2f4633f-8f97-4e35-b33f-041482bd8d35!
	
	
	==> storage-provisioner [c6bd46fb7d9861dd655a23db64bd18f5e89613a832e4638352e74fcf52951f8f] <==
	I1123 10:17:13.722967       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 10:17:43.725573       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
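The storage-provisioner log above ends with an i/o timeout against the apiserver ClusterIP (10.96.0.1:443). A minimal manual check of that same path, assuming the profile name from this run and curl being available on the node, would be:

    minikube -p old-k8s-version-990757 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version

A timeout here usually means the node cannot reach the kube-apiserver service IP at that moment (for example because the control plane is paused or still coming up), rather than a problem in the provisioner itself.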
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-990757 -n old-k8s-version-990757
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-990757 -n old-k8s-version-990757: exit status 2 (346.260618ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-990757 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.88s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-772252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-772252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (265.959996ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:18:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
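The paused-state check that fails here shells into the node and runs runc directly, as the stderr above shows. A hedged way to reproduce that exact check by hand, using the profile name from this run, is:

    minikube -p default-k8s-diff-port-772252 ssh -- sudo runc list -f json

The "open /run/runc: no such file or directory" message suggests the default runc state directory is simply absent on this CRI-O node, rather than that runc failed while listing genuinely paused containers.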
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-772252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-772252 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-772252 describe deploy/metrics-server -n kube-system: exit status 1 (59.427866ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-772252 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
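The assertion above checks that the deployment's container image was rewritten to the custom registry passed to the addon. A minimal way to read that field directly, assuming the metrics-server deployment had actually been created, is:

    kubectl --context default-k8s-diff-port-772252 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

In this run the check never gets that far: the deployment is NotFound because the addon enable itself exited with MK_ADDON_ENABLE_PAUSED.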
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-772252
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-772252:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e477e779f8bb284cda7af0a956566e37954aab2553cf746d0ae2cffb94c6e8bd",
	        "Created": "2025-11-23T10:17:18.483940214Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 372847,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:17:18.527177222Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/e477e779f8bb284cda7af0a956566e37954aab2553cf746d0ae2cffb94c6e8bd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e477e779f8bb284cda7af0a956566e37954aab2553cf746d0ae2cffb94c6e8bd/hostname",
	        "HostsPath": "/var/lib/docker/containers/e477e779f8bb284cda7af0a956566e37954aab2553cf746d0ae2cffb94c6e8bd/hosts",
	        "LogPath": "/var/lib/docker/containers/e477e779f8bb284cda7af0a956566e37954aab2553cf746d0ae2cffb94c6e8bd/e477e779f8bb284cda7af0a956566e37954aab2553cf746d0ae2cffb94c6e8bd-json.log",
	        "Name": "/default-k8s-diff-port-772252",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-772252:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-772252",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e477e779f8bb284cda7af0a956566e37954aab2553cf746d0ae2cffb94c6e8bd",
	                "LowerDir": "/var/lib/docker/overlay2/361c50e32123a50aa7fcfec243d28300895e72a7fd05ca5549049a366f302526-init/diff:/var/lib/docker/overlay2/fa24abb4c55f78a010c7e2a32f724b8d5e912441e40bb77877899b0e5f3a9c8d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/361c50e32123a50aa7fcfec243d28300895e72a7fd05ca5549049a366f302526/merged",
	                "UpperDir": "/var/lib/docker/overlay2/361c50e32123a50aa7fcfec243d28300895e72a7fd05ca5549049a366f302526/diff",
	                "WorkDir": "/var/lib/docker/overlay2/361c50e32123a50aa7fcfec243d28300895e72a7fd05ca5549049a366f302526/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-772252",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-772252/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-772252",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-772252",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-772252",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "acc1206dd9dddd9211ac88946f0fa4869141c429d6b36ab8f518f3a1cbbacfb0",
	            "SandboxKey": "/var/run/docker/netns/acc1206dd9dd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-772252": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "02dae00b49d770f584e401c586980c0831e8332aaaff622d8a3a7b262132c748",
	                    "EndpointID": "f792e0bd469ed82ccb5f7aedac9232d4b1bf240de7bf0dc10eb89fba761b306e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "2a:a7:17:22:41:d0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-772252",
	                        "e477e779f8bb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
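The Ports section of the inspect output above is where minikube resolves its published host ports. A hedged one-liner for pulling a single mapping (the apiserver port 8444/tcp used by this profile) with the same Go-template pattern the logs themselves use is:

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-772252

which would print 33116 for the container shown here.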
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-772252 -n default-k8s-diff-port-772252
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-772252 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-772252 logs -n 25: (1.113371814s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-791161 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p bridge-791161 sudo docker system info                                                                                                                                 │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p bridge-791161 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p bridge-791161 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p bridge-791161 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo cri-dockerd --version                                                                                                                              │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p bridge-791161 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo containerd config dump                                                                                                                             │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo crio config                                                                                                                                        │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ delete  │ -p bridge-791161                                                                                                                                                         │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ delete  │ -p disable-driver-mounts-268907                                                                                                                                          │ disable-driver-mounts-268907 │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-541522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p default-k8s-diff-port-772252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-412306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ start   │ -p embed-certs-412306 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ old-k8s-version-990757 image list --format=json                                                                                                                          │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p old-k8s-version-990757 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-772252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:17:19
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:17:19.609492  373797 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:17:19.609729  373797 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:17:19.609737  373797 out.go:374] Setting ErrFile to fd 2...
	I1123 10:17:19.609741  373797 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:17:19.609928  373797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:17:19.610361  373797 out.go:368] Setting JSON to false
	I1123 10:17:19.611590  373797 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10781,"bootTime":1763882259,"procs":496,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:17:19.611646  373797 start.go:143] virtualization: kvm guest
	I1123 10:17:19.613670  373797 out.go:179] * [embed-certs-412306] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:17:19.614888  373797 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:17:19.614881  373797 notify.go:221] Checking for updates...
	I1123 10:17:19.616064  373797 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:17:19.617045  373797 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:17:19.617927  373797 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 10:17:19.618967  373797 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:17:19.619935  373797 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:17:19.621299  373797 config.go:182] Loaded profile config "embed-certs-412306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:19.621911  373797 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:17:19.648614  373797 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 10:17:19.648746  373797 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:17:19.710021  373797 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-23 10:17:19.699419611 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:17:19.710161  373797 docker.go:319] overlay module found
	I1123 10:17:19.712107  373797 out.go:179] * Using the docker driver based on existing profile
	I1123 10:17:19.713258  373797 start.go:309] selected driver: docker
	I1123 10:17:19.713275  373797 start.go:927] validating driver "docker" against &{Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:17:19.713374  373797 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:17:19.713898  373797 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:17:19.779691  373797 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-23 10:17:19.765216478 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:17:19.779989  373797 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:17:19.780023  373797 cni.go:84] Creating CNI manager for ""
	I1123 10:17:19.780080  373797 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:17:19.780271  373797 start.go:353] cluster config:
	{Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:17:19.782420  373797 out.go:179] * Starting "embed-certs-412306" primary control-plane node in "embed-certs-412306" cluster
	I1123 10:17:19.783638  373797 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:17:19.785045  373797 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:17:19.786269  373797 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:17:19.786307  373797 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 10:17:19.786316  373797 cache.go:65] Caching tarball of preloaded images
	I1123 10:17:19.786372  373797 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:17:19.786421  373797 preload.go:238] Found /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 10:17:19.786437  373797 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:17:19.786558  373797 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/config.json ...
	I1123 10:17:19.811595  373797 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:17:19.811627  373797 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:17:19.811673  373797 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:17:19.811717  373797 start.go:360] acquireMachinesLock for embed-certs-412306: {Name:mk4f25fc676f86a4d15ab0bc341b16f0d56928c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:17:19.811792  373797 start.go:364] duration metric: took 48.053µs to acquireMachinesLock for "embed-certs-412306"
	I1123 10:17:19.811817  373797 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:17:19.811827  373797 fix.go:54] fixHost starting: 
	I1123 10:17:19.812155  373797 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:17:19.832074  373797 fix.go:112] recreateIfNeeded on embed-certs-412306: state=Stopped err=<nil>
	W1123 10:17:19.832132  373797 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 10:17:18.495023  371192 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:17:18.495055  371192 machine.go:97] duration metric: took 5.084691596s to provisionDockerMachine
	I1123 10:17:18.495069  371192 start.go:293] postStartSetup for "no-preload-541522" (driver="docker")
	I1123 10:17:18.495082  371192 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:17:18.495215  371192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:17:18.495278  371192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:17:18.522688  371192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:17:18.634392  371192 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:17:18.638904  371192 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:17:18.638946  371192 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:17:18.638961  371192 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/addons for local assets ...
	I1123 10:17:18.639015  371192 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/files for local assets ...
	I1123 10:17:18.639129  371192 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem -> 678702.pem in /etc/ssl/certs
	I1123 10:17:18.639289  371192 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:17:18.650865  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:17:18.676275  371192 start.go:296] duration metric: took 181.188377ms for postStartSetup
	I1123 10:17:18.676398  371192 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:17:18.676447  371192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:17:18.696551  371192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:17:18.798813  371192 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:17:18.804200  371192 fix.go:56] duration metric: took 5.847399025s for fixHost
	I1123 10:17:18.804227  371192 start.go:83] releasing machines lock for "no-preload-541522", held for 5.847449946s
	I1123 10:17:18.804314  371192 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-541522
	I1123 10:17:18.823965  371192 ssh_runner.go:195] Run: cat /version.json
	I1123 10:17:18.824026  371192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:17:18.824050  371192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:17:18.824151  371192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:17:18.846278  371192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:17:18.847666  371192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:17:19.015957  371192 ssh_runner.go:195] Run: systemctl --version
	I1123 10:17:19.023883  371192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:17:19.072321  371192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:17:19.078795  371192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:17:19.078868  371192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:17:19.088538  371192 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:17:19.088566  371192 start.go:496] detecting cgroup driver to use...
	I1123 10:17:19.088600  371192 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 10:17:19.088643  371192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:17:19.110539  371192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:17:19.132949  371192 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:17:19.133028  371192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:17:19.150165  371192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:17:19.165619  371192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:17:19.271465  371192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:17:19.379873  371192 docker.go:234] disabling docker service ...
	I1123 10:17:19.379932  371192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:17:19.398139  371192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:17:19.412992  371192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:17:19.503640  371192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:17:19.600343  371192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:17:19.613822  371192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:17:19.629382  371192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:17:19.629446  371192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:19.640465  371192 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 10:17:19.640529  371192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:19.651535  371192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:19.661697  371192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:19.674338  371192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:17:19.684964  371192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:19.697156  371192 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:19.707055  371192 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:19.717460  371192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:17:19.725865  371192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:17:19.736523  371192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:19.829013  371192 ssh_runner.go:195] Run: sudo systemctl restart crio
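The sed sequence above rewrites minikube's cri-o drop-in (/etc/crio/crio.conf.d/02-crio.conf) in place: it pins the pause image, switches the cgroup manager to systemd, forces conmon into the "pod" cgroup, and opens unprivileged ports through default_sysctls before crio is restarted. A quick way to confirm on the node that the edits took effect; this is an illustrative check, not part of the test run:

    # inspect the keys the sed commands above are expected to have set
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected output (approximately):
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",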
	I1123 10:17:19.984026  371192 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:17:19.984148  371192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:17:19.988801  371192 start.go:564] Will wait 60s for crictl version
	I1123 10:17:19.988866  371192 ssh_runner.go:195] Run: which crictl
	I1123 10:17:19.993024  371192 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:17:20.026159  371192 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:17:20.026262  371192 ssh_runner.go:195] Run: crio --version
	I1123 10:17:20.057945  371192 ssh_runner.go:195] Run: crio --version
	I1123 10:17:20.092537  371192 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:17:20.095052  371192 cli_runner.go:164] Run: docker network inspect no-preload-541522 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:17:20.113293  371192 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 10:17:20.117900  371192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
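The /etc/hosts update above uses a small idempotent pattern: strip any existing line for the entry, append the fresh one, and copy the result back over /etc/hosts in one step. The same pattern, sketched for a hypothetical entry (the host name and IP below are placeholders, not from this run):

    # drop any stale entry, append the new one, then replace the file
    { grep -v $'\texample.internal$' /etc/hosts; echo "10.0.0.5	example.internal"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts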
	I1123 10:17:20.129916  371192 kubeadm.go:884] updating cluster {Name:no-preload-541522 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-541522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:17:20.130038  371192 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:17:20.130098  371192 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:17:20.168390  371192 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:17:20.168418  371192 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:17:20.168427  371192 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1123 10:17:20.168553  371192 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-541522 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-541522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
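The kubelet override above is what minikube writes into the systemd drop-in (10-kubeadm.conf, copied to the node a few lines further down). If the effective kubelet flags need to be double-checked on the node, systemd can print the merged unit; a minimal check:

    # show the kubelet unit plus all drop-ins, including the ExecStart override above
    systemctl cat kubelet
    # or just the minikube-managed drop-in
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf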
	I1123 10:17:20.168646  371192 ssh_runner.go:195] Run: crio config
	I1123 10:17:20.221690  371192 cni.go:84] Creating CNI manager for ""
	I1123 10:17:20.221718  371192 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:17:20.221739  371192 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:17:20.221769  371192 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-541522 NodeName:no-preload-541522 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:17:20.221955  371192 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-541522"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:17:20.222044  371192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:17:20.231152  371192 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:17:20.231287  371192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:17:20.240306  371192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 10:17:20.253726  371192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:17:20.268663  371192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
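The rendered kubeadm config (four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is staged on the node as /var/tmp/minikube/kubeadm.yaml.new above. On recent kubeadm releases it can be sanity-checked independently of minikube; a hedged example, assuming the kubeadm binary sits alongside the kubelet/kubectl binaries in the versioned directory shown in this log:

    # validate the generated config against kubeadm's own schema (recent kubeadm only)
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new
    # compare against upstream defaults
    /var/lib/minikube/binaries/v1.34.1/kubeadm config print init-defaults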
	I1123 10:17:20.286013  371192 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:17:20.290286  371192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:17:20.301340  371192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:20.405447  371192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:20.425508  371192 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522 for IP: 192.168.85.2
	I1123 10:17:20.425698  371192 certs.go:195] generating shared ca certs ...
	I1123 10:17:20.425746  371192 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:20.425993  371192 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 10:17:20.426072  371192 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 10:17:20.426083  371192 certs.go:257] generating profile certs ...
	I1123 10:17:20.426244  371192 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/client.key
	I1123 10:17:20.426355  371192 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/apiserver.key.29b5f89d
	I1123 10:17:20.426438  371192 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/proxy-client.key
	I1123 10:17:20.426605  371192 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem (1338 bytes)
	W1123 10:17:20.426644  371192 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870_empty.pem, impossibly tiny 0 bytes
	I1123 10:17:20.426655  371192 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 10:17:20.426693  371192 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:17:20.426725  371192 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:17:20.426756  371192 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 10:17:20.426822  371192 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:17:20.428032  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:17:20.456018  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:17:20.479658  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:17:20.501657  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 10:17:20.529181  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 10:17:20.550509  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:17:20.569511  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:17:20.588713  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:17:20.606754  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:17:20.625365  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem --> /usr/share/ca-certificates/67870.pem (1338 bytes)
	I1123 10:17:20.644697  371192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /usr/share/ca-certificates/678702.pem (1708 bytes)
	I1123 10:17:20.662851  371192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:17:20.675998  371192 ssh_runner.go:195] Run: openssl version
	I1123 10:17:20.682347  371192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678702.pem && ln -fs /usr/share/ca-certificates/678702.pem /etc/ssl/certs/678702.pem"
	I1123 10:17:20.691464  371192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678702.pem
	I1123 10:17:20.695411  371192 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:28 /usr/share/ca-certificates/678702.pem
	I1123 10:17:20.695463  371192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678702.pem
	I1123 10:17:20.730632  371192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678702.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:17:20.739401  371192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:17:20.748466  371192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:20.752659  371192 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:20.752735  371192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:20.788588  371192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:17:20.797604  371192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67870.pem && ln -fs /usr/share/ca-certificates/67870.pem /etc/ssl/certs/67870.pem"
	I1123 10:17:20.806894  371192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67870.pem
	I1123 10:17:20.811228  371192 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:28 /usr/share/ca-certificates/67870.pem
	I1123 10:17:20.811284  371192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67870.pem
	I1123 10:17:20.846328  371192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67870.pem /etc/ssl/certs/51391683.0"
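The three openssl/ln blocks above implement the standard OpenSSL CA-hash directory layout: each CA certificate under /usr/share/ca-certificates is linked into /etc/ssl/certs under <subject-hash>.0, where the hash is what openssl x509 -hash prints. Reproducing the link name for one of the certs above:

    # the subject hash is what names the symlink in /etc/ssl/certs
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, hence the link /etc/ssl/certs/b5213941.0 created above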
	I1123 10:17:20.855328  371192 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:17:20.859478  371192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:17:20.893578  371192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:17:20.929466  371192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:17:20.977899  371192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:17:21.020876  371192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:17:21.070653  371192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
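Each of the checks above relies on openssl's -checkend flag: the command exits 0 if the certificate is still valid 86400 seconds (24 hours) from now and non-zero otherwise, which is what these control-plane certificate checks are testing for. A standalone example of the same test:

    # exit status 0 = valid for at least another day, 1 = expiring or expired
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "cert ok for 24h" || echo "cert expires within 24h"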
	I1123 10:17:21.123318  371192 kubeadm.go:401] StartCluster: {Name:no-preload-541522 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-541522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:17:21.123410  371192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:17:21.123464  371192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:17:21.157433  371192 cri.go:89] found id: "3638abd54c634ee34a952430b3c8ad3b8c78fb2c6abb24bdbdb0382ea4147574"
	I1123 10:17:21.157457  371192 cri.go:89] found id: "3806d3b11c0c4af0a295b79daeec9cddc1ca76da75190a71f7234b95f181f202"
	I1123 10:17:21.157464  371192 cri.go:89] found id: "454d88050f14061405415d3f827ed9bd0308c85f15a90182f9e2c8138c52f80e"
	I1123 10:17:21.157469  371192 cri.go:89] found id: "a08adaf22d6a20e8d1bde7d9ffe78523a672a25236e3b7bd280fe7482c65da6c"
	I1123 10:17:21.157473  371192 cri.go:89] found id: ""
	I1123 10:17:21.157519  371192 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:17:21.170853  371192 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:17:21Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:17:21.170942  371192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:17:21.179761  371192 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:17:21.179782  371192 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:17:21.179832  371192 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:17:21.188635  371192 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:17:21.189189  371192 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-541522" does not appear in /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:17:21.189463  371192 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-64343/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-541522" cluster setting kubeconfig missing "no-preload-541522" context setting]
	I1123 10:17:21.190011  371192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:21.191382  371192 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:17:21.200134  371192 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 10:17:21.200165  371192 kubeadm.go:602] duration metric: took 20.377182ms to restartPrimaryControlPlane
	I1123 10:17:21.200176  371192 kubeadm.go:403] duration metric: took 76.869746ms to StartCluster
	I1123 10:17:21.200197  371192 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:21.200268  371192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:17:21.201522  371192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:21.201810  371192 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:17:21.201858  371192 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:17:21.201968  371192 addons.go:70] Setting storage-provisioner=true in profile "no-preload-541522"
	I1123 10:17:21.201995  371192 addons.go:239] Setting addon storage-provisioner=true in "no-preload-541522"
	W1123 10:17:21.202008  371192 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:17:21.202006  371192 addons.go:70] Setting dashboard=true in profile "no-preload-541522"
	I1123 10:17:21.202029  371192 addons.go:70] Setting default-storageclass=true in profile "no-preload-541522"
	I1123 10:17:21.202053  371192 addons.go:239] Setting addon dashboard=true in "no-preload-541522"
	I1123 10:17:21.202055  371192 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-541522"
	W1123 10:17:21.202063  371192 addons.go:248] addon dashboard should already be in state true
	I1123 10:17:21.202081  371192 config.go:182] Loaded profile config "no-preload-541522": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:21.202038  371192 host.go:66] Checking if "no-preload-541522" exists ...
	I1123 10:17:21.202110  371192 host.go:66] Checking if "no-preload-541522" exists ...
	I1123 10:17:21.202447  371192 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:17:21.202598  371192 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:17:21.202660  371192 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:17:21.204706  371192 out.go:179] * Verifying Kubernetes components...
	I1123 10:17:21.206052  371192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:21.227863  371192 addons.go:239] Setting addon default-storageclass=true in "no-preload-541522"
	W1123 10:17:21.227926  371192 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:17:21.227956  371192 host.go:66] Checking if "no-preload-541522" exists ...
	I1123 10:17:21.228549  371192 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:17:21.232585  371192 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 10:17:21.232585  371192 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:17:21.233696  371192 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:21.233729  371192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:17:21.233799  371192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:17:21.233705  371192 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:17:21.234809  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:17:21.234828  371192 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:17:21.234890  371192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:17:21.265221  371192 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:21.265260  371192 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:17:21.265326  371192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:17:21.274943  371192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:17:21.276965  371192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:17:21.296189  371192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:17:21.367731  371192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:21.382397  371192 node_ready.go:35] waiting up to 6m0s for node "no-preload-541522" to be "Ready" ...
	I1123 10:17:21.398915  371192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:21.401528  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:17:21.401552  371192 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:17:21.419867  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:17:21.419897  371192 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:17:21.422575  371192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:21.439431  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:17:21.439464  371192 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:17:21.459190  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:17:21.459215  371192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:17:21.474803  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:17:21.474837  371192 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:17:21.490492  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:17:21.490520  371192 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:17:21.504992  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:17:21.505017  371192 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:17:21.519429  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:17:21.519456  371192 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:17:21.533295  371192 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:17:21.533322  371192 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:17:21.550435  371192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:17:18.396407  371315 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-772252:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.434126085s)
	I1123 10:17:18.396438  371315 kic.go:203] duration metric: took 4.434295488s to extract preloaded images to volume ...
	W1123 10:17:18.396521  371315 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 10:17:18.396560  371315 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 10:17:18.396604  371315 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 10:17:18.463256  371315 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-772252 --name default-k8s-diff-port-772252 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-772252 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-772252 --network default-k8s-diff-port-772252 --ip 192.168.103.2 --volume default-k8s-diff-port-772252:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 10:17:18.796638  371315 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Running}}
	I1123 10:17:18.816868  371315 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:17:18.840858  371315 cli_runner.go:164] Run: docker exec default-k8s-diff-port-772252 stat /var/lib/dpkg/alternatives/iptables
	I1123 10:17:18.897619  371315 oci.go:144] the created container "default-k8s-diff-port-772252" has a running status.
	I1123 10:17:18.897661  371315 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa...
	I1123 10:17:18.977365  371315 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 10:17:19.006386  371315 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:17:19.030565  371315 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 10:17:19.030591  371315 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-772252 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 10:17:19.079641  371315 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:17:19.103668  371315 machine.go:94] provisionDockerMachine start ...
	I1123 10:17:19.103794  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:19.133387  371315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:19.134363  371315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1123 10:17:19.134412  371315 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:17:19.135234  371315 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54846->127.0.0.1:33113: read: connection reset by peer
	I1123 10:17:22.290470  371315 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-772252
	
	I1123 10:17:22.290505  371315 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-772252"
	I1123 10:17:22.290581  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:22.310197  371315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:22.310489  371315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1123 10:17:22.310506  371315 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-772252 && echo "default-k8s-diff-port-772252" | sudo tee /etc/hostname
	I1123 10:17:22.471190  371315 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-772252
	
	I1123 10:17:22.471288  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:22.491303  371315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:22.491559  371315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1123 10:17:22.491595  371315 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-772252' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-772252/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-772252' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:17:22.649053  371315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:17:22.649118  371315 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-64343/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-64343/.minikube}
	I1123 10:17:22.649148  371315 ubuntu.go:190] setting up certificates
	I1123 10:17:22.649175  371315 provision.go:84] configureAuth start
	I1123 10:17:22.649268  371315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772252
	I1123 10:17:22.670533  371315 provision.go:143] copyHostCerts
	I1123 10:17:22.670621  371315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem, removing ...
	I1123 10:17:22.670640  371315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem
	I1123 10:17:22.670723  371315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem (1082 bytes)
	I1123 10:17:22.670844  371315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem, removing ...
	I1123 10:17:22.670855  371315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem
	I1123 10:17:22.670899  371315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem (1123 bytes)
	I1123 10:17:22.671009  371315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem, removing ...
	I1123 10:17:22.671020  371315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem
	I1123 10:17:22.671063  371315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem (1675 bytes)
	I1123 10:17:22.671173  371315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-772252 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-772252 localhost minikube]
	I1123 10:17:22.781341  371315 provision.go:177] copyRemoteCerts
	I1123 10:17:22.781420  371315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:17:22.781468  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:22.813351  371315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:17:22.707516  371192 node_ready.go:49] node "no-preload-541522" is "Ready"
	I1123 10:17:22.707555  371192 node_ready.go:38] duration metric: took 1.325107134s for node "no-preload-541522" to be "Ready" ...
	I1123 10:17:22.707572  371192 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:17:22.707865  371192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:17:23.284024  371192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.885050693s)
	I1123 10:17:23.284105  371192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.861477632s)
	I1123 10:17:23.284235  371192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.733760656s)
	I1123 10:17:23.284398  371192 api_server.go:72] duration metric: took 2.082551658s to wait for apiserver process to appear ...
	I1123 10:17:23.284414  371192 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:17:23.284434  371192 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:17:23.286130  371192 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-541522 addons enable metrics-server
	
	I1123 10:17:23.289610  371192 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:17:23.289631  371192 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:17:23.292533  371192 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
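The 500 from /healthz above is expected this early in a restart: the endpoint aggregates every post-start hook, and it keeps failing until the last hooks (rbac/bootstrap-roles and the bootstrap priority classes here) report ready, at which point the wait loop succeeds. The same information can be pulled by hand; an illustrative check against this cluster's endpoint:

    # verbose healthz lists every sub-check, in the same form quoted in the log above
    curl -k https://192.168.85.2:8443/healthz?verbose
    # a single failing hook can also be queried directly
    # (may need credentials; only the bare /healthz is anonymously readable by default)
    curl -k https://192.168.85.2:8443/healthz/poststarthook/rbac/bootstrap-roles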
	W1123 10:17:20.914139  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:22.914473  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	I1123 10:17:19.834110  373797 out.go:252] * Restarting existing docker container for "embed-certs-412306" ...
	I1123 10:17:19.834184  373797 cli_runner.go:164] Run: docker start embed-certs-412306
	I1123 10:17:20.130659  373797 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:17:20.150941  373797 kic.go:430] container "embed-certs-412306" state is running.
	I1123 10:17:20.151437  373797 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412306
	I1123 10:17:20.172969  373797 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/config.json ...
	I1123 10:17:20.173319  373797 machine.go:94] provisionDockerMachine start ...
	I1123 10:17:20.173400  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:20.193884  373797 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:20.194212  373797 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1123 10:17:20.194231  373797 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:17:20.195045  373797 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48678->127.0.0.1:33118: read: connection reset by peer
	I1123 10:17:23.348386  373797 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-412306
	
	I1123 10:17:23.348432  373797 ubuntu.go:182] provisioning hostname "embed-certs-412306"
	I1123 10:17:23.348510  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:23.369008  373797 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:23.369294  373797 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1123 10:17:23.369309  373797 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-412306 && echo "embed-certs-412306" | sudo tee /etc/hostname
	I1123 10:17:23.527808  373797 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-412306
	
	I1123 10:17:23.527905  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:23.552954  373797 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:23.553243  373797 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1123 10:17:23.553263  373797 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-412306' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-412306/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-412306' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:17:23.705470  373797 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:17:23.705501  373797 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-64343/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-64343/.minikube}
	I1123 10:17:23.705547  373797 ubuntu.go:190] setting up certificates
	I1123 10:17:23.705570  373797 provision.go:84] configureAuth start
	I1123 10:17:23.705648  373797 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412306
	I1123 10:17:23.727746  373797 provision.go:143] copyHostCerts
	I1123 10:17:23.727819  373797 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem, removing ...
	I1123 10:17:23.727834  373797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem
	I1123 10:17:23.727904  373797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem (1675 bytes)
	I1123 10:17:23.728152  373797 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem, removing ...
	I1123 10:17:23.728170  373797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem
	I1123 10:17:23.728229  373797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem (1082 bytes)
	I1123 10:17:23.728394  373797 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem, removing ...
	I1123 10:17:23.728408  373797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem
	I1123 10:17:23.728442  373797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem (1123 bytes)
	I1123 10:17:23.728545  373797 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem org=jenkins.embed-certs-412306 san=[127.0.0.1 192.168.94.2 embed-certs-412306 localhost minikube]
	I1123 10:17:23.786003  373797 provision.go:177] copyRemoteCerts
	I1123 10:17:23.786110  373797 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:17:23.786168  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:23.808607  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:23.930337  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 10:17:23.954195  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:17:23.973335  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1123 10:17:23.992599  373797 provision.go:87] duration metric: took 287.009489ms to configureAuth
	I1123 10:17:23.992633  373797 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:17:23.992827  373797 config.go:182] Loaded profile config "embed-certs-412306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:23.992947  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:24.015952  373797 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:24.016359  373797 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1123 10:17:24.016396  373797 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:17:24.382671  373797 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:17:24.382710  373797 machine.go:97] duration metric: took 4.209367018s to provisionDockerMachine
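Note: the /etc/sysconfig/crio.minikube drop-in above is written by running a shell command over the SSH tunnel on 127.0.0.1:33118. A minimal sketch of issuing that same remote command with golang.org/x/crypto/ssh is shown below; the key path and port are taken from the log, while the structure and error handling are illustrative rather than minikube's sshutil implementation.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only: skip host key verification
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33118", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	// Same drop-in write plus crio restart that the provisioner runs above.
	out, err := sess.CombinedOutput(`sudo mkdir -p /etc/sysconfig && printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}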
	I1123 10:17:24.382728  373797 start.go:293] postStartSetup for "embed-certs-412306" (driver="docker")
	I1123 10:17:24.382754  373797 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:17:24.382834  373797 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:17:24.382885  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:24.404505  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:24.511869  373797 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:17:24.516166  373797 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:17:24.516207  373797 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:17:24.516222  373797 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/addons for local assets ...
	I1123 10:17:24.516280  373797 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/files for local assets ...
	I1123 10:17:24.516393  373797 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem -> 678702.pem in /etc/ssl/certs
	I1123 10:17:24.516518  373797 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:17:24.524244  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:17:24.542545  373797 start.go:296] duration metric: took 159.79015ms for postStartSetup
	I1123 10:17:24.542619  373797 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:17:24.542668  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:24.563717  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:22.926511  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 10:17:22.950745  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 10:17:22.971167  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:17:22.992406  371315 provision.go:87] duration metric: took 343.209444ms to configureAuth
	I1123 10:17:22.992440  371315 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:17:22.992638  371315 config.go:182] Loaded profile config "default-k8s-diff-port-772252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:22.992764  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:23.015449  371315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:17:23.015746  371315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1123 10:17:23.015770  371315 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:17:23.334757  371315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:17:23.334787  371315 machine.go:97] duration metric: took 4.23109286s to provisionDockerMachine
	I1123 10:17:23.334800  371315 client.go:176] duration metric: took 10.163153814s to LocalClient.Create
	I1123 10:17:23.334826  371315 start.go:167] duration metric: took 10.163248519s to libmachine.API.Create "default-k8s-diff-port-772252"
	I1123 10:17:23.334840  371315 start.go:293] postStartSetup for "default-k8s-diff-port-772252" (driver="docker")
	I1123 10:17:23.334860  371315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:17:23.334929  371315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:17:23.334985  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:23.356328  371315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:17:23.463374  371315 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:17:23.467492  371315 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:17:23.467528  371315 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:17:23.467542  371315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/addons for local assets ...
	I1123 10:17:23.467604  371315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/files for local assets ...
	I1123 10:17:23.467697  371315 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem -> 678702.pem in /etc/ssl/certs
	I1123 10:17:23.467820  371315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:17:23.475956  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:17:23.497077  371315 start.go:296] duration metric: took 162.21628ms for postStartSetup
	I1123 10:17:23.497453  371315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772252
	I1123 10:17:23.517994  371315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/config.json ...
	I1123 10:17:23.518317  371315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:17:23.518376  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:23.544356  371315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:17:23.649434  371315 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:17:23.654312  371315 start.go:128] duration metric: took 10.487060831s to createHost
	I1123 10:17:23.654340  371315 start.go:83] releasing machines lock for "default-k8s-diff-port-772252", held for 10.487196123s
	I1123 10:17:23.654429  371315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772252
	I1123 10:17:23.672341  371315 ssh_runner.go:195] Run: cat /version.json
	I1123 10:17:23.672366  371315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:17:23.672402  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:23.672450  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:23.692134  371315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:17:23.692271  371315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:17:23.884469  371315 ssh_runner.go:195] Run: systemctl --version
	I1123 10:17:23.894358  371315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:17:23.951450  371315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:17:23.956897  371315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:17:23.956984  371315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:17:23.983807  371315 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 10:17:23.983830  371315 start.go:496] detecting cgroup driver to use...
	I1123 10:17:23.983859  371315 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 10:17:23.983898  371315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:17:24.001497  371315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:17:24.017078  371315 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:17:24.017175  371315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:17:24.033394  371315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:17:24.052236  371315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:17:24.146681  371315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:17:24.245622  371315 docker.go:234] disabling docker service ...
	I1123 10:17:24.245695  371315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:17:24.267262  371315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:17:24.283984  371315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:17:24.393614  371315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:17:24.485577  371315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:17:24.498373  371315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:17:24.513700  371315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:17:24.513745  371315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:24.524969  371315 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 10:17:24.525040  371315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:24.534062  371315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:24.543449  371315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:24.552383  371315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:17:24.562139  371315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:24.572184  371315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:24.587719  371315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:24.597575  371315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:17:24.606824  371315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:17:24.615535  371315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:24.700246  371315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:17:24.855040  371315 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:17:24.855123  371315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:17:24.859368  371315 start.go:564] Will wait 60s for crictl version
	I1123 10:17:24.859428  371315 ssh_runner.go:195] Run: which crictl
	I1123 10:17:24.863070  371315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:17:24.889521  371315 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:17:24.889599  371315 ssh_runner.go:195] Run: crio --version
	I1123 10:17:24.920115  371315 ssh_runner.go:195] Run: crio --version
	I1123 10:17:24.954417  371315 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
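Note: the sed commands logged above only touch a handful of settings in /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup, plus the ip_unprivileged_port_start sysctl). A standalone sketch of the first three rewrites, applied to an in-memory copy of the file rather than over SSH, assuming nothing about minikube's internals:

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the same substitutions the sed commands above make:
// force the pause image, force the cgroup manager, and pin conmon's cgroup.
// (The ip_unprivileged_port_start sysctl edit follows the same pattern.)
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	// Drop any existing conmon_cgroup line, then re-add it right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	sample := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(sample, "registry.k8s.io/pause:3.10.1", "systemd"))
}

The restart of crio afterwards (systemctl daemon-reload + systemctl restart crio, as in the log) is what makes the edited config take effect.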
	I1123 10:17:24.666037  373797 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:17:24.670358  373797 fix.go:56] duration metric: took 4.858524746s for fixHost
	I1123 10:17:24.670382  373797 start.go:83] releasing machines lock for "embed-certs-412306", held for 4.858576755s
	I1123 10:17:24.670445  373797 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412306
	I1123 10:17:24.688334  373797 ssh_runner.go:195] Run: cat /version.json
	I1123 10:17:24.688391  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:24.688402  373797 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:17:24.688482  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:24.708037  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:24.709542  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:24.881767  373797 ssh_runner.go:195] Run: systemctl --version
	I1123 10:17:24.889568  373797 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:17:24.928028  373797 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:17:24.933463  373797 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:17:24.933545  373797 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:17:24.944053  373797 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:17:24.944096  373797 start.go:496] detecting cgroup driver to use...
	I1123 10:17:24.944134  373797 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 10:17:24.944176  373797 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:17:24.961024  373797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:17:24.975672  373797 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:17:24.975755  373797 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:17:24.992860  373797 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:17:25.007660  373797 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:17:25.101571  373797 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:17:25.187706  373797 docker.go:234] disabling docker service ...
	I1123 10:17:25.187771  373797 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:17:25.203871  373797 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:17:25.220342  373797 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:17:25.310358  373797 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:17:25.403221  373797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:17:25.417018  373797 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:17:25.431507  373797 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:17:25.431564  373797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:25.441415  373797 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 10:17:25.441481  373797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:25.450871  373797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:25.459923  373797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:25.468817  373797 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:17:25.477361  373797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:25.487848  373797 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:25.496857  373797 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:17:25.506275  373797 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:17:25.514119  373797 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:17:25.522214  373797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:25.609285  373797 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:17:25.788628  373797 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:17:25.788710  373797 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:17:25.794577  373797 start.go:564] Will wait 60s for crictl version
	I1123 10:17:25.794647  373797 ssh_runner.go:195] Run: which crictl
	I1123 10:17:25.801054  373797 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:17:25.830537  373797 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:17:25.830618  373797 ssh_runner.go:195] Run: crio --version
	I1123 10:17:25.862137  373797 ssh_runner.go:195] Run: crio --version
	I1123 10:17:25.896309  373797 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:17:24.955476  371315 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-772252 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:17:24.975771  371315 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1123 10:17:24.980312  371315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
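Note: the bash one-liner above refreshes the host.minikube.internal entry by filtering out any stale line and appending the current gateway IP. The same transformation, expressed as a small Go helper over an in-memory hosts file (the function name here is made up for illustration, not minikube code):

package main

import (
	"fmt"
	"strings"
)

// setHostEntry mirrors the grep -v / echo pipeline: drop any line already
// ending in "<tab>name", then append "ip<tab>name".
func setHostEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n"
	fmt.Print(setHostEntry(hosts, "192.168.103.1", "host.minikube.internal"))
}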
	I1123 10:17:24.992335  371315 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-772252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:17:24.992470  371315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:17:24.992532  371315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:17:25.028422  371315 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:17:25.028446  371315 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:17:25.028507  371315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:17:25.062707  371315 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:17:25.062731  371315 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:17:25.062740  371315 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1123 10:17:25.062842  371315 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-772252 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:17:25.062921  371315 ssh_runner.go:195] Run: crio config
	I1123 10:17:25.111817  371315 cni.go:84] Creating CNI manager for ""
	I1123 10:17:25.111854  371315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:17:25.111873  371315 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:17:25.111897  371315 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-772252 NodeName:default-k8s-diff-port-772252 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:17:25.112030  371315 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-772252"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:17:25.112105  371315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:17:25.120360  371315 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:17:25.120421  371315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:17:25.129795  371315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1123 10:17:25.145251  371315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:17:25.160692  371315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
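Note: the kubeadm.yaml.new written above is rendered from the cluster parameters shown in the preceding dump (advertiseAddress 192.168.103.2, bindPort 8444, node name default-k8s-diff-port-772252). Purely as an illustration of that rendering step, a minimal text/template sketch producing an InitConfiguration fragment; the template string and struct are hypothetical, not minikube's actual bootstrapper template:

package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

func main() {
	tmpl := template.Must(template.New("init").Parse(initCfg))
	_ = tmpl.Execute(os.Stdout, struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
	}{"192.168.103.2", 8444, "default-k8s-diff-port-772252"})
}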
	I1123 10:17:25.173307  371315 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:17:25.177001  371315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:17:25.187493  371315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:25.282599  371315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:25.306664  371315 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252 for IP: 192.168.103.2
	I1123 10:17:25.306684  371315 certs.go:195] generating shared ca certs ...
	I1123 10:17:25.306700  371315 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:25.306864  371315 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 10:17:25.306920  371315 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 10:17:25.306934  371315 certs.go:257] generating profile certs ...
	I1123 10:17:25.307023  371315 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/client.key
	I1123 10:17:25.307042  371315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/client.crt with IP's: []
	I1123 10:17:25.369960  371315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/client.crt ...
	I1123 10:17:25.369988  371315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/client.crt: {Name:mk7f4719b240e51f803a30c22478d2cf1d0e1199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:25.370175  371315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/client.key ...
	I1123 10:17:25.370199  371315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/client.key: {Name:mkd811194a7ece5d786aacc912a42bc560ea4296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:25.370292  371315 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.key.21e800d1
	I1123 10:17:25.370312  371315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.crt.21e800d1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1123 10:17:25.423997  371315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.crt.21e800d1 ...
	I1123 10:17:25.424030  371315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.crt.21e800d1: {Name:mk6de12f0748b003728065f4169ec8bcc4410f5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:25.424186  371315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.key.21e800d1 ...
	I1123 10:17:25.424201  371315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.key.21e800d1: {Name:mkfeca4687eb3d49033d88eae184a2c0e40ab44b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:25.424294  371315 certs.go:382] copying /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.crt.21e800d1 -> /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.crt
	I1123 10:17:25.424406  371315 certs.go:386] copying /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.key.21e800d1 -> /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.key
	I1123 10:17:25.424489  371315 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.key
	I1123 10:17:25.424508  371315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.crt with IP's: []
	I1123 10:17:25.484984  371315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.crt ...
	I1123 10:17:25.485010  371315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.crt: {Name:mkc9c6bf8ac400416e9eb1893c09433f60578057 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:25.485213  371315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.key ...
	I1123 10:17:25.485235  371315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.key: {Name:mk504063bf5acfe6751f65cfaba17411b52827e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:25.485488  371315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem (1338 bytes)
	W1123 10:17:25.485543  371315 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870_empty.pem, impossibly tiny 0 bytes
	I1123 10:17:25.485559  371315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 10:17:25.485600  371315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:17:25.485631  371315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:17:25.485652  371315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 10:17:25.485702  371315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:17:25.486510  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:17:25.505646  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:17:25.524124  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:17:25.543811  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 10:17:25.568526  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 10:17:25.588007  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:17:25.606546  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:17:25.626591  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:17:25.647854  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem --> /usr/share/ca-certificates/67870.pem (1338 bytes)
	I1123 10:17:25.673928  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /usr/share/ca-certificates/678702.pem (1708 bytes)
	I1123 10:17:25.698071  371315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:17:25.717953  371315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:17:25.733564  371315 ssh_runner.go:195] Run: openssl version
	I1123 10:17:25.743071  371315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67870.pem && ln -fs /usr/share/ca-certificates/67870.pem /etc/ssl/certs/67870.pem"
	I1123 10:17:25.755937  371315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67870.pem
	I1123 10:17:25.762383  371315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:28 /usr/share/ca-certificates/67870.pem
	I1123 10:17:25.762464  371315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67870.pem
	I1123 10:17:25.817928  371315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67870.pem /etc/ssl/certs/51391683.0"
	I1123 10:17:25.829386  371315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678702.pem && ln -fs /usr/share/ca-certificates/678702.pem /etc/ssl/certs/678702.pem"
	I1123 10:17:25.840669  371315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678702.pem
	I1123 10:17:25.845206  371315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:28 /usr/share/ca-certificates/678702.pem
	I1123 10:17:25.845259  371315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678702.pem
	I1123 10:17:25.884816  371315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678702.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:17:25.895209  371315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:17:25.905009  371315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:25.909147  371315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:25.909212  371315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:25.947660  371315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
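Note: the openssl -hash symlinks created above make the minikube CA discoverable under /etc/ssl/certs. A quick stand-alone way to confirm that the same PEM is usable as a trust root from Go (the path is taken from the log; this check is not part of the test itself):

package main

import (
	"crypto/x509"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pool := x509.NewCertPool()
	// AppendCertsFromPEM returns false if no certificate could be parsed.
	if !pool.AppendCertsFromPEM(pemBytes) {
		fmt.Fprintln(os.Stderr, "no certificates parsed from PEM")
		os.Exit(1)
	}
	fmt.Println("CA loaded into pool")
}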
	I1123 10:17:25.958547  371315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:17:25.963329  371315 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 10:17:25.963400  371315 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-772252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:17:25.963515  371315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:17:25.963592  371315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:17:25.994552  371315 cri.go:89] found id: ""
	I1123 10:17:25.994632  371315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:17:26.004720  371315 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:17:26.014394  371315 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 10:17:26.014465  371315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:17:26.023894  371315 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 10:17:26.023927  371315 kubeadm.go:158] found existing configuration files:
	
	I1123 10:17:26.023984  371315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1123 10:17:26.032407  371315 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 10:17:26.032468  371315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 10:17:26.041623  371315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1123 10:17:26.054201  371315 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 10:17:26.054261  371315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:17:26.066701  371315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1123 10:17:26.079955  371315 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 10:17:26.080191  371315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:17:26.093784  371315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1123 10:17:26.105549  371315 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 10:17:26.105617  371315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 10:17:26.115532  371315 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 10:17:26.160623  371315 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 10:17:26.160969  371315 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 10:17:26.186117  371315 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 10:17:26.186236  371315 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 10:17:26.186285  371315 kubeadm.go:319] OS: Linux
	I1123 10:17:26.186354  371315 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 10:17:26.186447  371315 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 10:17:26.186539  371315 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 10:17:26.186616  371315 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 10:17:26.186682  371315 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 10:17:26.186746  371315 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 10:17:26.186824  371315 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 10:17:26.186884  371315 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 10:17:26.263125  371315 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 10:17:26.263295  371315 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 10:17:26.263483  371315 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 10:17:26.272376  371315 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 10:17:25.897306  373797 cli_runner.go:164] Run: docker network inspect embed-certs-412306 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:17:25.917131  373797 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1123 10:17:25.921503  373797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:17:25.932797  373797 kubeadm.go:884] updating cluster {Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:17:25.932962  373797 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:17:25.933022  373797 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:17:25.971485  373797 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:17:25.971507  373797 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:17:25.971565  373797 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:17:25.998401  373797 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:17:25.998430  373797 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:17:25.998439  373797 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1123 10:17:25.998565  373797 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-412306 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:17:25.998651  373797 ssh_runner.go:195] Run: crio config
	I1123 10:17:26.054182  373797 cni.go:84] Creating CNI manager for ""
	I1123 10:17:26.054212  373797 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:17:26.054230  373797 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:17:26.054261  373797 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-412306 NodeName:embed-certs-412306 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:17:26.054449  373797 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-412306"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:17:26.054528  373797 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:17:26.069247  373797 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:17:26.069315  373797 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:17:26.084536  373797 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1123 10:17:26.105237  373797 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:17:26.122042  373797 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
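The kubeadm/kubelet configuration dumped above is a multi-document YAML stream that is written out as /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration only (not minikube's own code), the sketch below reads such a file from a hypothetical local copy named kubeadm.yaml and prints the cgroup driver and CRI endpoint from the KubeletConfiguration document; it assumes gopkg.in/yaml.v3 is available in the module.

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3" // assumption: available as a module dependency
    )

    // Only the fields we want to inspect from the KubeletConfiguration document.
    type kubeletConfig struct {
        Kind                     string `yaml:"kind"`
        CgroupDriver             string `yaml:"cgroupDriver"`
        ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
    }

    func main() {
        f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the generated config
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f) // the file is a multi-document YAML stream
        for {
            var doc kubeletConfig
            if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
                break
            } else if err != nil {
                panic(err)
            }
            if doc.Kind == "KubeletConfiguration" {
                fmt.Printf("cgroupDriver=%s runtimeEndpoint=%s\n", doc.CgroupDriver, doc.ContainerRuntimeEndpoint)
            }
        }
    }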
	I1123 10:17:26.135463  373797 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:17:26.139894  373797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
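The /etc/hosts rewrite above drops any stale control-plane.minikube.internal line, appends the current mapping, and copies a temp file back over /etc/hosts with sudo. A rough local equivalent in Go, for illustration only (the real step runs as the quoted bash one-liner over SSH; the path and IP below are taken from the log):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinControlPlaneHost rewrites hostsPath so that exactly one line maps
    // control-plane.minikube.internal to ip, dropping any previous mapping.
    // Sketch only; the logged flow does this remotely with grep/cp under sudo.
    func pinControlPlaneHost(hostsPath, ip string) error {
        const host = "control-plane.minikube.internal"
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop the stale entry, like `grep -v`
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := pinControlPlaneHost("/etc/hosts", "192.168.94.2"); err != nil {
            fmt.Println(err)
        }
    }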
	I1123 10:17:26.152470  373797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:26.259400  373797 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:26.293349  373797 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306 for IP: 192.168.94.2
	I1123 10:17:26.293376  373797 certs.go:195] generating shared ca certs ...
	I1123 10:17:26.293398  373797 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:26.293563  373797 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 10:17:26.293621  373797 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 10:17:26.293631  373797 certs.go:257] generating profile certs ...
	I1123 10:17:26.293719  373797 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/client.key
	I1123 10:17:26.293765  373797 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key.7dd66a37
	I1123 10:17:26.293798  373797 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.key
	I1123 10:17:26.293962  373797 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem (1338 bytes)
	W1123 10:17:26.294032  373797 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870_empty.pem, impossibly tiny 0 bytes
	I1123 10:17:26.294043  373797 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 10:17:26.294080  373797 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:17:26.294150  373797 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:17:26.294182  373797 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 10:17:26.294239  373797 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:17:26.295078  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:17:26.319354  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:17:26.346624  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:17:26.375357  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 10:17:26.408580  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 10:17:26.438245  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 10:17:26.463452  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:17:26.491192  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/embed-certs-412306/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:17:26.535358  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem --> /usr/share/ca-certificates/67870.pem (1338 bytes)
	I1123 10:17:26.564257  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /usr/share/ca-certificates/678702.pem (1708 bytes)
	I1123 10:17:26.589245  373797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:17:26.615973  373797 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:17:26.634980  373797 ssh_runner.go:195] Run: openssl version
	I1123 10:17:26.643923  373797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67870.pem && ln -fs /usr/share/ca-certificates/67870.pem /etc/ssl/certs/67870.pem"
	I1123 10:17:26.658008  373797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67870.pem
	I1123 10:17:26.663894  373797 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:28 /usr/share/ca-certificates/67870.pem
	I1123 10:17:26.663963  373797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67870.pem
	I1123 10:17:26.725019  373797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67870.pem /etc/ssl/certs/51391683.0"
	I1123 10:17:26.741335  373797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678702.pem && ln -fs /usr/share/ca-certificates/678702.pem /etc/ssl/certs/678702.pem"
	I1123 10:17:26.754306  373797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678702.pem
	I1123 10:17:26.760205  373797 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:28 /usr/share/ca-certificates/678702.pem
	I1123 10:17:26.760289  373797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678702.pem
	I1123 10:17:26.817066  373797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678702.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:17:26.828242  373797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:17:26.840286  373797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:26.845608  373797 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:26.845667  373797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:17:26.907823  373797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
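The `ln -fs ... /etc/ssl/certs/<hash>.0` steps above follow OpenSSL's subject-hash naming: each certificate is linked into the trust directory under the hash printed by `openssl x509 -hash -noout`, with a ".0" suffix, which is where OpenSSL's verification lookup expects to find it. A small illustrative sketch of the same two steps (not minikube's code; it would need the same root privileges as the logged commands):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkIntoTrustStore mirrors the logged steps: compute the OpenSSL subject
    // hash of a PEM certificate and symlink it into trustDir as <hash>.0.
    // Illustrative only; paths and privileges are assumptions.
    func linkIntoTrustStore(pemPath, trustDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(trustDir, hash+".0")
        _ = os.Remove(link) // replace an existing link, like `ln -fs`
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkIntoTrustStore("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }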
	I1123 10:17:26.920712  373797 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:17:26.926906  373797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:17:26.993735  373797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:17:27.067117  373797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:17:27.144625  373797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:17:27.218572  373797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:17:27.280794  373797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
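Each `openssl x509 ... -checkend 86400` run above asks whether the certificate is still valid 86400 seconds (24 hours) from now; openssl exits non-zero if it will expire within that window. A minimal sketch wrapping the same check, for illustration only (the certificate path is one of those shown in the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // expiresWithinADay runs the same check as the log above:
    // `openssl x509 -noout -in <cert> -checkend 86400` exits 0 if the
    // certificate is still valid 86400 seconds from now, non-zero otherwise.
    func expiresWithinADay(certPath string) (bool, error) {
        cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
        if err := cmd.Run(); err != nil {
            if _, ok := err.(*exec.ExitError); ok {
                return true, nil // non-zero exit: certificate expires within 24h
            }
            return false, err // openssl itself failed to run
        }
        return false, nil
    }

    func main() {
        soon, err := expiresWithinADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        fmt.Println(soon, err)
    }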
	I1123 10:17:27.347949  373797 kubeadm.go:401] StartCluster: {Name:embed-certs-412306 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:17:27.348439  373797 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:17:27.348547  373797 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:17:27.395884  373797 cri.go:89] found id: "0632950c74da2eb4978b2f96c82351b0c7fc311f03cdaaff9f60fb24bdaa3804"
	I1123 10:17:27.395917  373797 cri.go:89] found id: "b7c384560289e99b732f0e7897327765130672b6e7346a6340bd2a1e35372ea5"
	I1123 10:17:27.395924  373797 cri.go:89] found id: "3ce42ea391320b5ee86e145a2f64c2015bb9f8236b5dfa38af9a25f2cb484824"
	I1123 10:17:27.395929  373797 cri.go:89] found id: "e3ffbd81d631a2d4ada1879aabcbc74e4a0a1df338a0ca8e07cf4c3ff88f9430"
	I1123 10:17:27.395933  373797 cri.go:89] found id: ""
	I1123 10:17:27.395979  373797 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:17:27.419845  373797 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:17:27Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:17:27.419963  373797 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:17:27.439378  373797 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:17:27.439398  373797 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:17:27.439448  373797 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:17:27.451084  373797 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:17:27.451946  373797 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-412306" does not appear in /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:17:27.452494  373797 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-64343/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-412306" cluster setting kubeconfig missing "embed-certs-412306" context setting]
	I1123 10:17:27.453585  373797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:27.455654  373797 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:17:27.467125  373797 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1123 10:17:27.467282  373797 kubeadm.go:602] duration metric: took 27.876451ms to restartPrimaryControlPlane
	I1123 10:17:27.467296  373797 kubeadm.go:403] duration metric: took 119.360738ms to StartCluster
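The restart path above decides whether the control plane needs to be reconfigured by diffing the previously deployed /var/tmp/minikube/kubeadm.yaml against the freshly generated kubeadm.yaml.new; an empty diff means the existing static pods can be reused, as logged by "does not require reconfiguration". A minimal sketch of that decision, for illustration only (the logged command also runs under sudo):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // needsReconfiguration mirrors the logged `diff -u old new` step:
    // diff exits 0 when the files match (no reconfiguration needed), 1 when
    // they differ, and >1 on error.
    func needsReconfiguration(oldCfg, newCfg string) (bool, error) {
        err := exec.Command("diff", "-u", oldCfg, newCfg).Run()
        if err == nil {
            return false, nil
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, nil
        }
        return false, err
    }

    func main() {
        changed, err := needsReconfiguration("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(changed, err)
    }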
	I1123 10:17:27.467315  373797 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:27.467483  373797 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:17:27.469463  373797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:27.470000  373797 config.go:182] Loaded profile config "embed-certs-412306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:27.470115  373797 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:17:27.470204  373797 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-412306"
	I1123 10:17:27.470221  373797 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-412306"
	W1123 10:17:27.470228  373797 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:17:27.470273  373797 host.go:66] Checking if "embed-certs-412306" exists ...
	I1123 10:17:27.470801  373797 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:17:27.471054  373797 addons.go:70] Setting dashboard=true in profile "embed-certs-412306"
	I1123 10:17:27.471072  373797 addons.go:239] Setting addon dashboard=true in "embed-certs-412306"
	W1123 10:17:27.471080  373797 addons.go:248] addon dashboard should already be in state true
	I1123 10:17:27.471255  373797 host.go:66] Checking if "embed-certs-412306" exists ...
	I1123 10:17:27.471727  373797 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:17:27.471889  373797 addons.go:70] Setting default-storageclass=true in profile "embed-certs-412306"
	I1123 10:17:27.471907  373797 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-412306"
	I1123 10:17:27.472219  373797 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:17:27.472422  373797 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:17:27.474200  373797 out.go:179] * Verifying Kubernetes components...
	I1123 10:17:27.475292  373797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:27.502438  373797 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:17:27.503728  373797 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:27.503754  373797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:17:27.503822  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:27.506369  373797 addons.go:239] Setting addon default-storageclass=true in "embed-certs-412306"
	W1123 10:17:27.506905  373797 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:17:27.506973  373797 host.go:66] Checking if "embed-certs-412306" exists ...
	I1123 10:17:27.507482  373797 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:17:27.520746  373797 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 10:17:27.522141  373797 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:17:23.293716  371192 addons.go:530] duration metric: took 2.091867033s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 10:17:23.784999  371192 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:17:23.789545  371192 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:17:23.789569  371192 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:17:24.285244  371192 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:17:24.290382  371192 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 10:17:24.291908  371192 api_server.go:141] control plane version: v1.34.1
	I1123 10:17:24.291943  371192 api_server.go:131] duration metric: took 1.007520894s to wait for apiserver health ...
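The api_server.go lines above poll https://<node-ip>:8443/healthz until it returns 200; the 500 bodies quoted earlier are the apiserver's verbose per-check output while post-start hooks such as rbac/bootstrap-roles are still completing. A minimal polling sketch, for illustration only (the apiserver serves a self-signed certificate here, so verification is skipped; the URL and timeout are assumptions):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthy polls the apiserver /healthz endpoint until it returns 200
    // or the deadline passes, printing the verbose body on non-200 responses.
    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
        if err := waitHealthy("https://192.168.85.2:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }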
	I1123 10:17:24.291958  371192 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:17:24.295996  371192 system_pods.go:59] 8 kube-system pods found
	I1123 10:17:24.296039  371192 system_pods.go:61] "coredns-66bc5c9577-krmwt" [39101b53-5254-41f3-bac9-c711e67dc551] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:24.296051  371192 system_pods.go:61] "etcd-no-preload-541522" [80258726-c8e2-4b27-962c-ee45e6948d2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:17:24.296061  371192 system_pods.go:61] "kindnet-9vppw" [3b98e7a4-34e9-46af-97a1-764b6ed92ec6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 10:17:24.296079  371192 system_pods.go:61] "kube-apiserver-no-preload-541522" [54bb8554-b2d7-4fc2-9d26-507e36b6d56f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:17:24.296121  371192 system_pods.go:61] "kube-controller-manager-no-preload-541522" [b6d91917-0381-4558-9f2a-769f81cf9d86] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:17:24.296136  371192 system_pods.go:61] "kube-proxy-sllct" [c5b13417-4bca-4ec1-8e60-cf5016aa28ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 10:17:24.296144  371192 system_pods.go:61] "kube-scheduler-no-preload-541522" [31a3c55f-ac27-4800-af06-822af5bc6836] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:17:24.296159  371192 system_pods.go:61] "storage-provisioner" [40eb99ea-9515-431c-888b-81826014f8a6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:17:24.296167  371192 system_pods.go:74] duration metric: took 4.202627ms to wait for pod list to return data ...
	I1123 10:17:24.296176  371192 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:17:24.298844  371192 default_sa.go:45] found service account: "default"
	I1123 10:17:24.298867  371192 default_sa.go:55] duration metric: took 2.684141ms for default service account to be created ...
	I1123 10:17:24.298878  371192 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:17:24.301765  371192 system_pods.go:86] 8 kube-system pods found
	I1123 10:17:24.301800  371192 system_pods.go:89] "coredns-66bc5c9577-krmwt" [39101b53-5254-41f3-bac9-c711e67dc551] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:24.301814  371192 system_pods.go:89] "etcd-no-preload-541522" [80258726-c8e2-4b27-962c-ee45e6948d2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:17:24.301825  371192 system_pods.go:89] "kindnet-9vppw" [3b98e7a4-34e9-46af-97a1-764b6ed92ec6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 10:17:24.301839  371192 system_pods.go:89] "kube-apiserver-no-preload-541522" [54bb8554-b2d7-4fc2-9d26-507e36b6d56f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:17:24.301852  371192 system_pods.go:89] "kube-controller-manager-no-preload-541522" [b6d91917-0381-4558-9f2a-769f81cf9d86] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:17:24.301865  371192 system_pods.go:89] "kube-proxy-sllct" [c5b13417-4bca-4ec1-8e60-cf5016aa28ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 10:17:24.301877  371192 system_pods.go:89] "kube-scheduler-no-preload-541522" [31a3c55f-ac27-4800-af06-822af5bc6836] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:17:24.301893  371192 system_pods.go:89] "storage-provisioner" [40eb99ea-9515-431c-888b-81826014f8a6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:17:24.301907  371192 system_pods.go:126] duration metric: took 3.021865ms to wait for k8s-apps to be running ...
	I1123 10:17:24.301921  371192 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:17:24.301973  371192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:17:24.318330  371192 system_svc.go:56] duration metric: took 16.399439ms WaitForService to wait for kubelet
	I1123 10:17:24.318363  371192 kubeadm.go:587] duration metric: took 3.1165169s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:17:24.318385  371192 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:17:24.322994  371192 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:17:24.323037  371192 node_conditions.go:123] node cpu capacity is 8
	I1123 10:17:24.323054  371192 node_conditions.go:105] duration metric: took 4.663725ms to run NodePressure ...
	I1123 10:17:24.323070  371192 start.go:242] waiting for startup goroutines ...
	I1123 10:17:24.323078  371192 start.go:247] waiting for cluster config update ...
	I1123 10:17:24.323103  371192 start.go:256] writing updated cluster config ...
	I1123 10:17:24.323457  371192 ssh_runner.go:195] Run: rm -f paused
	I1123 10:17:24.329879  371192 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:17:24.335776  371192 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-krmwt" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 10:17:26.342596  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	I1123 10:17:26.275186  371315 out.go:252]   - Generating certificates and keys ...
	I1123 10:17:26.275352  371315 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 10:17:26.275478  371315 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 10:17:27.203820  371315 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:17:27.842679  371315 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	W1123 10:17:25.414040  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:27.423694  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	I1123 10:17:27.523106  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:17:27.523125  373797 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:17:27.523187  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:27.544410  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:27.546884  373797 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:27.546911  373797 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:17:27.547054  373797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:17:27.554028  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:27.584494  373797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:17:27.729896  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:17:27.729923  373797 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:17:27.730389  373797 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:27.748713  373797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:27.762305  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:17:27.762345  373797 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:17:27.773616  373797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:27.783643  373797 node_ready.go:35] waiting up to 6m0s for node "embed-certs-412306" to be "Ready" ...
	I1123 10:17:27.816165  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:17:27.816196  373797 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:17:27.853683  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:17:27.853715  373797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:17:27.895194  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:17:27.895222  373797 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:17:27.929349  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:17:27.929380  373797 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:17:27.952056  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:17:27.952129  373797 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:17:27.972228  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:17:27.972259  373797 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:17:27.995106  373797 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:17:27.995291  373797 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:17:28.022880  373797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:17:30.169450  373797 node_ready.go:49] node "embed-certs-412306" is "Ready"
	I1123 10:17:30.169488  373797 node_ready.go:38] duration metric: took 2.385791286s for node "embed-certs-412306" to be "Ready" ...
	I1123 10:17:30.169508  373797 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:17:30.169570  373797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:17:30.263935  373797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.515175318s)
	I1123 10:17:30.844237  373797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.070570716s)
	I1123 10:17:30.844367  373797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.821379534s)
	I1123 10:17:30.844403  373797 api_server.go:72] duration metric: took 3.371939039s to wait for apiserver process to appear ...
	I1123 10:17:30.844420  373797 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:17:30.844441  373797 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 10:17:30.846035  373797 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-412306 addons enable metrics-server
	
	I1123 10:17:30.847355  373797 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1123 10:17:28.139930  371315 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:17:28.712709  371315 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:17:28.816265  371315 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:17:28.816782  371315 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-772252 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 10:17:29.335727  371315 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:17:29.335950  371315 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-772252 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 10:17:29.643887  371315 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:17:30.187228  371315 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:17:30.521995  371315 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:17:30.522113  371315 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:17:30.784711  371315 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:17:31.090260  371315 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 10:17:31.313967  371315 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:17:31.369836  371315 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:17:31.747785  371315 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:17:31.748584  371315 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:17:31.753537  371315 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1123 10:17:28.348145  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:30.843172  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	I1123 10:17:31.754796  371315 out.go:252]   - Booting up control plane ...
	I1123 10:17:31.754943  371315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:17:31.755055  371315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:17:31.755934  371315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:17:31.779002  371315 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:17:31.779431  371315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 10:17:31.788946  371315 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 10:17:31.789330  371315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:17:31.789392  371315 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:17:31.939409  371315 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 10:17:31.939585  371315 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1123 10:17:29.940244  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:32.465244  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	I1123 10:17:30.848716  373797 addons.go:530] duration metric: took 3.378601039s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1123 10:17:30.850138  373797 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:17:30.850165  373797 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:17:31.345352  373797 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 10:17:31.353137  373797 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:17:31.353176  373797 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:17:31.844492  373797 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 10:17:31.850813  373797 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 10:17:31.852077  373797 api_server.go:141] control plane version: v1.34.1
	I1123 10:17:31.852127  373797 api_server.go:131] duration metric: took 1.007698573s to wait for apiserver health ...
	I1123 10:17:31.852139  373797 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:17:31.855854  373797 system_pods.go:59] 8 kube-system pods found
	I1123 10:17:31.855888  373797 system_pods.go:61] "coredns-66bc5c9577-fxl7j" [4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:31.855899  373797 system_pods.go:61] "etcd-embed-certs-412306" [f8befdc6-c172-4569-9ca7-2d3ba827dbb5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:17:31.855905  373797 system_pods.go:61] "kindnet-sm2h2" [1af4c3f2-8377-4a64-9499-502b9841a81d] Running
	I1123 10:17:31.855914  373797 system_pods.go:61] "kube-apiserver-embed-certs-412306" [0c456387-52ea-4271-af83-9b87f7ddc832] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:17:31.855923  373797 system_pods.go:61] "kube-controller-manager-embed-certs-412306" [cebfc94c-5d85-40f3-8099-b50676f43ef5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:17:31.855929  373797 system_pods.go:61] "kube-proxy-2vnjq" [10c4fa48-37ca-4164-83ef-7ab034f844a9] Running
	I1123 10:17:31.855939  373797 system_pods.go:61] "kube-scheduler-embed-certs-412306" [9384ec5c-f592-4f4d-84ba-313b7eabf50c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:17:31.855944  373797 system_pods.go:61] "storage-provisioner" [199ec01f-2a64-4666-af02-cd1ad7ae4cc2] Running
	I1123 10:17:31.855952  373797 system_pods.go:74] duration metric: took 3.805802ms to wait for pod list to return data ...
	I1123 10:17:31.855961  373797 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:17:31.858650  373797 default_sa.go:45] found service account: "default"
	I1123 10:17:31.858679  373797 default_sa.go:55] duration metric: took 2.711408ms for default service account to be created ...
	I1123 10:17:31.858690  373797 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:17:31.862049  373797 system_pods.go:86] 8 kube-system pods found
	I1123 10:17:31.862079  373797 system_pods.go:89] "coredns-66bc5c9577-fxl7j" [4a7df323-64d0-4b3c-8f57-dfc5dd08eb0b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:31.862105  373797 system_pods.go:89] "etcd-embed-certs-412306" [f8befdc6-c172-4569-9ca7-2d3ba827dbb5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:17:31.862124  373797 system_pods.go:89] "kindnet-sm2h2" [1af4c3f2-8377-4a64-9499-502b9841a81d] Running
	I1123 10:17:31.862134  373797 system_pods.go:89] "kube-apiserver-embed-certs-412306" [0c456387-52ea-4271-af83-9b87f7ddc832] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:17:31.862144  373797 system_pods.go:89] "kube-controller-manager-embed-certs-412306" [cebfc94c-5d85-40f3-8099-b50676f43ef5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:17:31.862150  373797 system_pods.go:89] "kube-proxy-2vnjq" [10c4fa48-37ca-4164-83ef-7ab034f844a9] Running
	I1123 10:17:31.862163  373797 system_pods.go:89] "kube-scheduler-embed-certs-412306" [9384ec5c-f592-4f4d-84ba-313b7eabf50c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:17:31.862169  373797 system_pods.go:89] "storage-provisioner" [199ec01f-2a64-4666-af02-cd1ad7ae4cc2] Running
	I1123 10:17:31.862179  373797 system_pods.go:126] duration metric: took 3.483683ms to wait for k8s-apps to be running ...
	I1123 10:17:31.862188  373797 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:17:31.862236  373797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:17:31.880556  373797 system_svc.go:56] duration metric: took 18.357008ms WaitForService to wait for kubelet
	I1123 10:17:31.880607  373797 kubeadm.go:587] duration metric: took 4.408143491s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:17:31.880631  373797 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:17:31.884219  373797 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:17:31.884253  373797 node_conditions.go:123] node cpu capacity is 8
	I1123 10:17:31.884271  373797 node_conditions.go:105] duration metric: took 3.634037ms to run NodePressure ...
	I1123 10:17:31.884287  373797 start.go:242] waiting for startup goroutines ...
	I1123 10:17:31.884299  373797 start.go:247] waiting for cluster config update ...
	I1123 10:17:31.884319  373797 start.go:256] writing updated cluster config ...
	I1123 10:17:31.884624  373797 ssh_runner.go:195] Run: rm -f paused
	I1123 10:17:31.889946  373797 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:17:31.894375  373797 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fxl7j" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 10:17:33.901572  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:33.523784  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:35.846995  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	I1123 10:17:32.941081  371315 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001868854s
	I1123 10:17:32.945152  371315 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 10:17:32.945305  371315 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I1123 10:17:32.945433  371315 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 10:17:32.945515  371315 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 10:17:35.861865  371315 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.916644987s
	I1123 10:17:36.776622  371315 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.831435695s
	I1123 10:17:38.447477  371315 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502246404s
	I1123 10:17:38.458614  371315 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:17:38.467767  371315 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:17:38.476049  371315 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:17:38.476376  371315 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-772252 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:17:38.484454  371315 kubeadm.go:319] [bootstrap-token] Using token: 7c739u.zwt0bal8xrfj12xj
	W1123 10:17:34.916285  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:37.413216  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:36.400976  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:38.912096  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	I1123 10:17:38.485658  371315 out.go:252]   - Configuring RBAC rules ...
	I1123 10:17:38.485833  371315 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:17:38.489646  371315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:17:38.494425  371315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:17:38.496889  371315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:17:38.499031  371315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:17:38.501264  371315 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:17:38.853661  371315 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:17:39.273659  371315 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:17:39.853812  371315 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:17:39.855808  371315 kubeadm.go:319] 
	I1123 10:17:39.855908  371315 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:17:39.855921  371315 kubeadm.go:319] 
	I1123 10:17:39.856050  371315 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:17:39.856060  371315 kubeadm.go:319] 
	I1123 10:17:39.856130  371315 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:17:39.856198  371315 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:17:39.856261  371315 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:17:39.856271  371315 kubeadm.go:319] 
	I1123 10:17:39.856335  371315 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:17:39.856340  371315 kubeadm.go:319] 
	I1123 10:17:39.856394  371315 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:17:39.856399  371315 kubeadm.go:319] 
	I1123 10:17:39.856459  371315 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:17:39.856552  371315 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:17:39.856635  371315 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:17:39.856644  371315 kubeadm.go:319] 
	I1123 10:17:39.856747  371315 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:17:39.856841  371315 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:17:39.856850  371315 kubeadm.go:319] 
	I1123 10:17:39.856946  371315 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 7c739u.zwt0bal8xrfj12xj \
	I1123 10:17:39.857068  371315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 \
	I1123 10:17:39.857106  371315 kubeadm.go:319] 	--control-plane 
	I1123 10:17:39.857112  371315 kubeadm.go:319] 
	I1123 10:17:39.857223  371315 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:17:39.857231  371315 kubeadm.go:319] 
	I1123 10:17:39.857360  371315 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 7c739u.zwt0bal8xrfj12xj \
	I1123 10:17:39.857522  371315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 
	I1123 10:17:39.861171  371315 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 10:17:39.861361  371315 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 10:17:39.861384  371315 cni.go:84] Creating CNI manager for ""
	I1123 10:17:39.861392  371315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:17:39.863656  371315 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1123 10:17:38.341179  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:40.341963  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	I1123 10:17:39.864757  371315 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:17:39.869984  371315 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:17:39.870008  371315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:17:39.886324  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:17:40.362280  371315 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:17:40.362400  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:40.362400  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-772252 minikube.k8s.io/updated_at=2025_11_23T10_17_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=default-k8s-diff-port-772252 minikube.k8s.io/primary=true
	I1123 10:17:40.379214  371315 ops.go:34] apiserver oom_adj: -16
	I1123 10:17:40.464921  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:40.965405  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:41.465003  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:41.965821  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:42.464950  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1123 10:17:39.414230  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:41.914196  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:41.400282  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:43.899909  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	I1123 10:17:42.965639  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:43.465528  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:43.965079  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:44.464998  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:44.965763  371315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:17:45.037128  371315 kubeadm.go:1114] duration metric: took 4.67480031s to wait for elevateKubeSystemPrivileges
	I1123 10:17:45.037171  371315 kubeadm.go:403] duration metric: took 19.073779602s to StartCluster
	I1123 10:17:45.037193  371315 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:45.037267  371315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:17:45.039120  371315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:17:45.039419  371315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:17:45.039444  371315 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:17:45.039520  371315 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:17:45.039628  371315 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-772252"
	I1123 10:17:45.039656  371315 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-772252"
	I1123 10:17:45.039686  371315 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-772252"
	I1123 10:17:45.039661  371315 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-772252"
	I1123 10:17:45.039720  371315 config.go:182] Loaded profile config "default-k8s-diff-port-772252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:45.039784  371315 host.go:66] Checking if "default-k8s-diff-port-772252" exists ...
	I1123 10:17:45.040159  371315 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:17:45.040405  371315 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:17:45.041405  371315 out.go:179] * Verifying Kubernetes components...
	I1123 10:17:45.042675  371315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:17:45.064542  371315 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-772252"
	I1123 10:17:45.064587  371315 host.go:66] Checking if "default-k8s-diff-port-772252" exists ...
	I1123 10:17:45.064919  371315 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:17:45.065873  371315 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:17:45.067076  371315 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:45.067111  371315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:17:45.067169  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:45.085477  371315 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:45.085507  371315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:17:45.086250  371315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:17:45.092224  371315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:17:45.114171  371315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:17:45.126365  371315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:17:45.189744  371315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:17:45.218033  371315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:17:45.235955  371315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:17:45.315901  371315 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1123 10:17:45.317142  371315 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-772252" to be "Ready" ...
	I1123 10:17:45.535405  371315 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1123 10:17:42.843988  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:45.342896  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	I1123 10:17:45.536493  371315 addons.go:530] duration metric: took 496.970486ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 10:17:45.820948  371315 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-772252" context rescaled to 1 replicas
	W1123 10:17:47.319425  371315 node_ready.go:57] node "default-k8s-diff-port-772252" has "Ready":"False" status (will retry)
	W1123 10:17:43.914556  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:46.414198  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:45.900010  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:47.900260  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:47.841815  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:50.341880  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:49.319741  371315 node_ready.go:57] node "default-k8s-diff-port-772252" has "Ready":"False" status (will retry)
	W1123 10:17:51.320336  371315 node_ready.go:57] node "default-k8s-diff-port-772252" has "Ready":"False" status (will retry)
	W1123 10:17:48.913341  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:51.412869  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:53.413536  366730 pod_ready.go:104] pod "coredns-5dd5756b68-fsbfv" is not "Ready", error: <nil>
	W1123 10:17:50.400011  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:52.900077  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	I1123 10:17:53.913334  366730 pod_ready.go:94] pod "coredns-5dd5756b68-fsbfv" is "Ready"
	I1123 10:17:53.913363  366730 pod_ready.go:86] duration metric: took 39.505598501s for pod "coredns-5dd5756b68-fsbfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:53.916455  366730 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:53.920979  366730 pod_ready.go:94] pod "etcd-old-k8s-version-990757" is "Ready"
	I1123 10:17:53.921004  366730 pod_ready.go:86] duration metric: took 4.524758ms for pod "etcd-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:53.923876  366730 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:53.928363  366730 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-990757" is "Ready"
	I1123 10:17:53.928389  366730 pod_ready.go:86] duration metric: took 4.49134ms for pod "kube-apiserver-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:53.931268  366730 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:54.111689  366730 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-990757" is "Ready"
	I1123 10:17:54.111728  366730 pod_ready.go:86] duration metric: took 180.43869ms for pod "kube-controller-manager-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:54.312490  366730 pod_ready.go:83] waiting for pod "kube-proxy-99g4b" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:54.711645  366730 pod_ready.go:94] pod "kube-proxy-99g4b" is "Ready"
	I1123 10:17:54.711677  366730 pod_ready.go:86] duration metric: took 399.161367ms for pod "kube-proxy-99g4b" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:54.912461  366730 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:55.311759  366730 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-990757" is "Ready"
	I1123 10:17:55.311784  366730 pod_ready.go:86] duration metric: took 399.295747ms for pod "kube-scheduler-old-k8s-version-990757" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:55.311813  366730 pod_ready.go:40] duration metric: took 40.908845551s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:17:55.356075  366730 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1123 10:17:55.357834  366730 out.go:203] 
	W1123 10:17:55.359077  366730 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 10:17:55.360393  366730 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 10:17:55.361705  366730 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-990757" cluster and "default" namespace by default
	W1123 10:17:52.841432  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:55.341775  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:57.341870  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	W1123 10:17:53.320896  371315 node_ready.go:57] node "default-k8s-diff-port-772252" has "Ready":"False" status (will retry)
	W1123 10:17:55.820856  371315 node_ready.go:57] node "default-k8s-diff-port-772252" has "Ready":"False" status (will retry)
	I1123 10:17:56.320034  371315 node_ready.go:49] node "default-k8s-diff-port-772252" is "Ready"
	I1123 10:17:56.320062  371315 node_ready.go:38] duration metric: took 11.002894749s for node "default-k8s-diff-port-772252" to be "Ready" ...
	I1123 10:17:56.320077  371315 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:17:56.320168  371315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:17:56.333026  371315 api_server.go:72] duration metric: took 11.293527033s to wait for apiserver process to appear ...
	I1123 10:17:56.333046  371315 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:17:56.333064  371315 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1123 10:17:56.337320  371315 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1123 10:17:56.338383  371315 api_server.go:141] control plane version: v1.34.1
	I1123 10:17:56.338411  371315 api_server.go:131] duration metric: took 5.357543ms to wait for apiserver health ...
	I1123 10:17:56.338423  371315 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:17:56.342472  371315 system_pods.go:59] 8 kube-system pods found
	I1123 10:17:56.342509  371315 system_pods.go:61] "coredns-66bc5c9577-c5c4c" [b393f50c-f83f-45b4-8c27-56971c3279c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:56.342517  371315 system_pods.go:61] "etcd-default-k8s-diff-port-772252" [de179811-197e-4e4b-9933-f051ca479011] Running
	I1123 10:17:56.342525  371315 system_pods.go:61] "kindnet-4dnjf" [3258335f-0700-4a89-8857-c10cfc091182] Running
	I1123 10:17:56.342531  371315 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-772252" [080999dc-1510-4086-aa20-f7975eb1cb69] Running
	I1123 10:17:56.342538  371315 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-772252" [215dd3a6-702c-4aaf-9299-6d5de9eb21b5] Running
	I1123 10:17:56.342542  371315 system_pods.go:61] "kube-proxy-xfghg" [5cf715f4-c1ca-4938-a213-7095cb2c7823] Running
	I1123 10:17:56.342549  371315 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-772252" [c020136f-1728-4423-b34e-932682df1f89] Running
	I1123 10:17:56.342554  371315 system_pods.go:61] "storage-provisioner" [9d727e76-94f8-4344-820c-f2d4e83f5d87] Running
	I1123 10:17:56.342565  371315 system_pods.go:74] duration metric: took 4.133412ms to wait for pod list to return data ...
	I1123 10:17:56.342577  371315 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:17:56.344836  371315 default_sa.go:45] found service account: "default"
	I1123 10:17:56.344858  371315 default_sa.go:55] duration metric: took 2.273737ms for default service account to be created ...
	I1123 10:17:56.344868  371315 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:17:56.347696  371315 system_pods.go:86] 8 kube-system pods found
	I1123 10:17:56.347728  371315 system_pods.go:89] "coredns-66bc5c9577-c5c4c" [b393f50c-f83f-45b4-8c27-56971c3279c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:56.347736  371315 system_pods.go:89] "etcd-default-k8s-diff-port-772252" [de179811-197e-4e4b-9933-f051ca479011] Running
	I1123 10:17:56.347744  371315 system_pods.go:89] "kindnet-4dnjf" [3258335f-0700-4a89-8857-c10cfc091182] Running
	I1123 10:17:56.347754  371315 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772252" [080999dc-1510-4086-aa20-f7975eb1cb69] Running
	I1123 10:17:56.347760  371315 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772252" [215dd3a6-702c-4aaf-9299-6d5de9eb21b5] Running
	I1123 10:17:56.347768  371315 system_pods.go:89] "kube-proxy-xfghg" [5cf715f4-c1ca-4938-a213-7095cb2c7823] Running
	I1123 10:17:56.347773  371315 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772252" [c020136f-1728-4423-b34e-932682df1f89] Running
	I1123 10:17:56.347778  371315 system_pods.go:89] "storage-provisioner" [9d727e76-94f8-4344-820c-f2d4e83f5d87] Running
	I1123 10:17:56.347800  371315 retry.go:31] will retry after 302.24178ms: missing components: kube-dns
	I1123 10:17:56.653773  371315 system_pods.go:86] 8 kube-system pods found
	I1123 10:17:56.653806  371315 system_pods.go:89] "coredns-66bc5c9577-c5c4c" [b393f50c-f83f-45b4-8c27-56971c3279c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:56.653815  371315 system_pods.go:89] "etcd-default-k8s-diff-port-772252" [de179811-197e-4e4b-9933-f051ca479011] Running
	I1123 10:17:56.653820  371315 system_pods.go:89] "kindnet-4dnjf" [3258335f-0700-4a89-8857-c10cfc091182] Running
	I1123 10:17:56.653830  371315 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772252" [080999dc-1510-4086-aa20-f7975eb1cb69] Running
	I1123 10:17:56.653835  371315 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772252" [215dd3a6-702c-4aaf-9299-6d5de9eb21b5] Running
	I1123 10:17:56.653840  371315 system_pods.go:89] "kube-proxy-xfghg" [5cf715f4-c1ca-4938-a213-7095cb2c7823] Running
	I1123 10:17:56.653846  371315 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772252" [c020136f-1728-4423-b34e-932682df1f89] Running
	I1123 10:17:56.653851  371315 system_pods.go:89] "storage-provisioner" [9d727e76-94f8-4344-820c-f2d4e83f5d87] Running
	I1123 10:17:56.653871  371315 retry.go:31] will retry after 265.267308ms: missing components: kube-dns
	I1123 10:17:56.923296  371315 system_pods.go:86] 8 kube-system pods found
	I1123 10:17:56.923348  371315 system_pods.go:89] "coredns-66bc5c9577-c5c4c" [b393f50c-f83f-45b4-8c27-56971c3279c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:17:56.923356  371315 system_pods.go:89] "etcd-default-k8s-diff-port-772252" [de179811-197e-4e4b-9933-f051ca479011] Running
	I1123 10:17:56.923382  371315 system_pods.go:89] "kindnet-4dnjf" [3258335f-0700-4a89-8857-c10cfc091182] Running
	I1123 10:17:56.923389  371315 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772252" [080999dc-1510-4086-aa20-f7975eb1cb69] Running
	I1123 10:17:56.923401  371315 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772252" [215dd3a6-702c-4aaf-9299-6d5de9eb21b5] Running
	I1123 10:17:56.923407  371315 system_pods.go:89] "kube-proxy-xfghg" [5cf715f4-c1ca-4938-a213-7095cb2c7823] Running
	I1123 10:17:56.923412  371315 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772252" [c020136f-1728-4423-b34e-932682df1f89] Running
	I1123 10:17:56.923417  371315 system_pods.go:89] "storage-provisioner" [9d727e76-94f8-4344-820c-f2d4e83f5d87] Running
	I1123 10:17:56.923434  371315 retry.go:31] will retry after 380.263968ms: missing components: kube-dns
	I1123 10:17:57.307510  371315 system_pods.go:86] 8 kube-system pods found
	I1123 10:17:57.307546  371315 system_pods.go:89] "coredns-66bc5c9577-c5c4c" [b393f50c-f83f-45b4-8c27-56971c3279c0] Running
	I1123 10:17:57.307554  371315 system_pods.go:89] "etcd-default-k8s-diff-port-772252" [de179811-197e-4e4b-9933-f051ca479011] Running
	I1123 10:17:57.307562  371315 system_pods.go:89] "kindnet-4dnjf" [3258335f-0700-4a89-8857-c10cfc091182] Running
	I1123 10:17:57.307568  371315 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772252" [080999dc-1510-4086-aa20-f7975eb1cb69] Running
	I1123 10:17:57.307572  371315 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772252" [215dd3a6-702c-4aaf-9299-6d5de9eb21b5] Running
	I1123 10:17:57.307577  371315 system_pods.go:89] "kube-proxy-xfghg" [5cf715f4-c1ca-4938-a213-7095cb2c7823] Running
	I1123 10:17:57.307581  371315 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772252" [c020136f-1728-4423-b34e-932682df1f89] Running
	I1123 10:17:57.307586  371315 system_pods.go:89] "storage-provisioner" [9d727e76-94f8-4344-820c-f2d4e83f5d87] Running
	I1123 10:17:57.307596  371315 system_pods.go:126] duration metric: took 962.72072ms to wait for k8s-apps to be running ...
	I1123 10:17:57.307606  371315 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:17:57.307658  371315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:17:57.320972  371315 system_svc.go:56] duration metric: took 13.353924ms WaitForService to wait for kubelet
	I1123 10:17:57.321004  371315 kubeadm.go:587] duration metric: took 12.281511348s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:17:57.321022  371315 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:17:57.323660  371315 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:17:57.323692  371315 node_conditions.go:123] node cpu capacity is 8
	I1123 10:17:57.323712  371315 node_conditions.go:105] duration metric: took 2.684637ms to run NodePressure ...
	I1123 10:17:57.323726  371315 start.go:242] waiting for startup goroutines ...
	I1123 10:17:57.323742  371315 start.go:247] waiting for cluster config update ...
	I1123 10:17:57.323759  371315 start.go:256] writing updated cluster config ...
	I1123 10:17:57.324067  371315 ssh_runner.go:195] Run: rm -f paused
	I1123 10:17:57.328141  371315 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:17:57.331589  371315 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c5c4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.335257  371315 pod_ready.go:94] pod "coredns-66bc5c9577-c5c4c" is "Ready"
	I1123 10:17:57.335285  371315 pod_ready.go:86] duration metric: took 3.674367ms for pod "coredns-66bc5c9577-c5c4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.337137  371315 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.341306  371315 pod_ready.go:94] pod "etcd-default-k8s-diff-port-772252" is "Ready"
	I1123 10:17:57.341329  371315 pod_ready.go:86] duration metric: took 4.173911ms for pod "etcd-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.343139  371315 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.346731  371315 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-772252" is "Ready"
	I1123 10:17:57.346750  371315 pod_ready.go:86] duration metric: took 3.589943ms for pod "kube-apiserver-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.348459  371315 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.732573  371315 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-772252" is "Ready"
	I1123 10:17:57.732607  371315 pod_ready.go:86] duration metric: took 384.128293ms for pod "kube-controller-manager-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:57.932984  371315 pod_ready.go:83] waiting for pod "kube-proxy-xfghg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:58.331761  371315 pod_ready.go:94] pod "kube-proxy-xfghg" is "Ready"
	I1123 10:17:58.331788  371315 pod_ready.go:86] duration metric: took 398.77791ms for pod "kube-proxy-xfghg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:58.533376  371315 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:58.932675  371315 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-772252" is "Ready"
	I1123 10:17:58.932705  371315 pod_ready.go:86] duration metric: took 399.30371ms for pod "kube-scheduler-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:17:58.932717  371315 pod_ready.go:40] duration metric: took 1.604548656s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:17:58.976709  371315 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:17:58.978487  371315 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-772252" cluster and "default" namespace by default
	W1123 10:17:55.399817  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:57.899557  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:17:59.840864  371192 pod_ready.go:104] pod "coredns-66bc5c9577-krmwt" is not "Ready", error: <nil>
	I1123 10:18:00.341361  371192 pod_ready.go:94] pod "coredns-66bc5c9577-krmwt" is "Ready"
	I1123 10:18:00.341391  371192 pod_ready.go:86] duration metric: took 36.00558292s for pod "coredns-66bc5c9577-krmwt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:00.344015  371192 pod_ready.go:83] waiting for pod "etcd-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:00.348659  371192 pod_ready.go:94] pod "etcd-no-preload-541522" is "Ready"
	I1123 10:18:00.348689  371192 pod_ready.go:86] duration metric: took 4.650364ms for pod "etcd-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:00.351238  371192 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:00.354817  371192 pod_ready.go:94] pod "kube-apiserver-no-preload-541522" is "Ready"
	I1123 10:18:00.354840  371192 pod_ready.go:86] duration metric: took 3.5776ms for pod "kube-apiserver-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:00.356850  371192 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:00.540127  371192 pod_ready.go:94] pod "kube-controller-manager-no-preload-541522" is "Ready"
	I1123 10:18:00.540160  371192 pod_ready.go:86] duration metric: took 183.289677ms for pod "kube-controller-manager-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:00.740192  371192 pod_ready.go:83] waiting for pod "kube-proxy-sllct" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:01.139411  371192 pod_ready.go:94] pod "kube-proxy-sllct" is "Ready"
	I1123 10:18:01.139439  371192 pod_ready.go:86] duration metric: took 399.218147ms for pod "kube-proxy-sllct" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:01.340436  371192 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:01.740259  371192 pod_ready.go:94] pod "kube-scheduler-no-preload-541522" is "Ready"
	I1123 10:18:01.740295  371192 pod_ready.go:86] duration metric: took 399.829885ms for pod "kube-scheduler-no-preload-541522" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:01.740307  371192 pod_ready.go:40] duration metric: took 37.410392677s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:18:01.788412  371192 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:18:01.791159  371192 out.go:179] * Done! kubectl is now configured to use "no-preload-541522" cluster and "default" namespace by default
	W1123 10:18:00.399534  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	W1123 10:18:02.400234  373797 pod_ready.go:104] pod "coredns-66bc5c9577-fxl7j" is not "Ready", error: <nil>
	I1123 10:18:02.899900  373797 pod_ready.go:94] pod "coredns-66bc5c9577-fxl7j" is "Ready"
	I1123 10:18:02.899931  373797 pod_ready.go:86] duration metric: took 31.005531566s for pod "coredns-66bc5c9577-fxl7j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:02.902103  373797 pod_ready.go:83] waiting for pod "etcd-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:02.905655  373797 pod_ready.go:94] pod "etcd-embed-certs-412306" is "Ready"
	I1123 10:18:02.905688  373797 pod_ready.go:86] duration metric: took 3.561728ms for pod "etcd-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:02.907483  373797 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:02.911179  373797 pod_ready.go:94] pod "kube-apiserver-embed-certs-412306" is "Ready"
	I1123 10:18:02.911205  373797 pod_ready.go:86] duration metric: took 3.701799ms for pod "kube-apiserver-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:02.912993  373797 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:03.099021  373797 pod_ready.go:94] pod "kube-controller-manager-embed-certs-412306" is "Ready"
	I1123 10:18:03.099054  373797 pod_ready.go:86] duration metric: took 186.04071ms for pod "kube-controller-manager-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:03.298482  373797 pod_ready.go:83] waiting for pod "kube-proxy-2vnjq" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:03.697866  373797 pod_ready.go:94] pod "kube-proxy-2vnjq" is "Ready"
	I1123 10:18:03.697900  373797 pod_ready.go:86] duration metric: took 399.390791ms for pod "kube-proxy-2vnjq" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:03.898175  373797 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:04.298226  373797 pod_ready.go:94] pod "kube-scheduler-embed-certs-412306" is "Ready"
	I1123 10:18:04.298262  373797 pod_ready.go:86] duration metric: took 400.039787ms for pod "kube-scheduler-embed-certs-412306" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:18:04.298279  373797 pod_ready.go:40] duration metric: took 32.408301003s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:18:04.344316  373797 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:18:04.346173  373797 out.go:179] * Done! kubectl is now configured to use "embed-certs-412306" cluster and "default" namespace by default
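	Each of the runs above ends with an extra wait in which pod_ready.go polls the kube-system control-plane pods (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) until each reports a Ready condition. The following is only a minimal client-go sketch of that style of check, not minikube's pod_ready.go: the label selectors are copied from the log lines above, while the kubeconfig location and the fixed 2s poll interval are illustrative assumptions.

	// poll_ready.go: illustrative sketch only (not minikube's pod_ready.go).
	// Assumes a reachable cluster and a kubeconfig at the default location.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// selectors mirror the labels listed in the pod_ready.go log lines above.
	var selectors = []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for _, sel := range selectors {
			for {
				pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
				if err != nil {
					panic(err)
				}
				if len(pods.Items) > 0 && allReady(pods.Items) {
					fmt.Printf("pods matching %q are Ready\n", sel)
					break
				}
				time.Sleep(2 * time.Second) // assumed interval; minikube's own retry/backoff differs
			}
		}
	}

	func allReady(pods []corev1.Pod) bool {
		for _, p := range pods {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false
			}
		}
		return true
	}

	A real run would typically report each selector becoming Ready in the same order the log above waits on them (coredns last here, since it stays Pending until the node is Ready).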
	
	
	==> CRI-O <==
	Nov 23 10:17:56 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:17:56.218862223Z" level=info msg="Started container" PID=1863 containerID=fa8ae7896b487f214e0ee18cd7455c509712a89da30321759164e8fac353f7c3 description=kube-system/storage-provisioner/storage-provisioner id=8435354b-3318-4e48-910b-81d15c35737a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b57e361939e1249819bf94d626098e56989ad8537cd0a7bf9a17b93cade11782
	Nov 23 10:17:56 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:17:56.22056103Z" level=info msg="Started container" PID=1866 containerID=b13027790053dd0e5e6527fd3a648fcd681857a87f590c2b991e95825e0f90a6 description=kube-system/coredns-66bc5c9577-c5c4c/coredns id=55a1da37-ab6c-48e4-964e-bcec822c7b61 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ccc107278a731435e19a41cd1b8718c60459f5dba988a924d948bfb10389d93e
	Nov 23 10:17:59 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:17:59.443816074Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4fab8298-a434-448a-af7a-f705b36c4d14 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:17:59 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:17:59.443878625Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:59 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:17:59.44868731Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6967193fa1834761ee13c1e709f596160816d2b0723f9c60488cfe551f2eda45 UID:c037ffcf-7b8b-4442-9c4e-d188a4de7b08 NetNS:/var/run/netns/cad7ab2a-a21e-4c8a-b867-67b8507e241a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00047a360}] Aliases:map[]}"
	Nov 23 10:17:59 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:17:59.448717859Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 10:17:59 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:17:59.458331609Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6967193fa1834761ee13c1e709f596160816d2b0723f9c60488cfe551f2eda45 UID:c037ffcf-7b8b-4442-9c4e-d188a4de7b08 NetNS:/var/run/netns/cad7ab2a-a21e-4c8a-b867-67b8507e241a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00047a360}] Aliases:map[]}"
	Nov 23 10:17:59 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:17:59.458468926Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 10:17:59 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:17:59.459204718Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 10:17:59 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:17:59.460007452Z" level=info msg="Ran pod sandbox 6967193fa1834761ee13c1e709f596160816d2b0723f9c60488cfe551f2eda45 with infra container: default/busybox/POD" id=4fab8298-a434-448a-af7a-f705b36c4d14 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:17:59 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:17:59.46121038Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=acf6611b-039d-44c6-bbfe-61bbdc0d6c1e name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:17:59 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:17:59.46134179Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=acf6611b-039d-44c6-bbfe-61bbdc0d6c1e name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:17:59 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:17:59.461385649Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=acf6611b-039d-44c6-bbfe-61bbdc0d6c1e name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:17:59 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:17:59.462165298Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=27d53c35-6c92-4854-9876-e5ee8b5074c4 name=/runtime.v1.ImageService/PullImage
	Nov 23 10:17:59 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:17:59.463846971Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 10:18:01 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:18:01.623177846Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=27d53c35-6c92-4854-9876-e5ee8b5074c4 name=/runtime.v1.ImageService/PullImage
	Nov 23 10:18:01 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:18:01.62398561Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5c8dcda1-cb33-445c-bc97-dd1e6a49cd95 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:18:01 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:18:01.625559291Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=300e3f62-3519-4b3f-89d5-e9209ebf9a0c name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:18:01 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:18:01.629333562Z" level=info msg="Creating container: default/busybox/busybox" id=66c863dc-e9c2-40b7-9797-f71f00c26d77 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:18:01 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:18:01.629462695Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:01 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:18:01.634036225Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:01 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:18:01.635005648Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:01 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:18:01.668300778Z" level=info msg="Created container 4e1daa2888100973773b78bd2cc025f62bc985cd81475b0b71b9f2b451746b43: default/busybox/busybox" id=66c863dc-e9c2-40b7-9797-f71f00c26d77 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:18:01 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:18:01.668966321Z" level=info msg="Starting container: 4e1daa2888100973773b78bd2cc025f62bc985cd81475b0b71b9f2b451746b43" id=69a5c8d6-6f3c-4a25-8739-79255db4dfb3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:18:01 default-k8s-diff-port-772252 crio[774]: time="2025-11-23T10:18:01.670696682Z" level=info msg="Started container" PID=1945 containerID=4e1daa2888100973773b78bd2cc025f62bc985cd81475b0b71b9f2b451746b43 description=default/busybox/busybox id=69a5c8d6-6f3c-4a25-8739-79255db4dfb3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6967193fa1834761ee13c1e709f596160816d2b0723f9c60488cfe551f2eda45
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	4e1daa2888100       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   6967193fa1834       busybox                                                default
	b13027790053d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   ccc107278a731       coredns-66bc5c9577-c5c4c                               kube-system
	fa8ae7896b487       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   b57e361939e12       storage-provisioner                                    kube-system
	7647678a08077       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      24 seconds ago      Running             kindnet-cni               0                   a371d33e3c893       kindnet-4dnjf                                          kube-system
	77de72c36d1f6       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   e0aa26d194751       kube-proxy-xfghg                                       kube-system
	4a1d587883eda       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   7b886b8c76d07       kube-scheduler-default-k8s-diff-port-772252            kube-system
	6a89ab4d18797       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   cb2d66f8521a2       etcd-default-k8s-diff-port-772252                      kube-system
	51597ff2b68a7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   5d9f87625e5c3       kube-controller-manager-default-k8s-diff-port-772252   kube-system
	2de34614a6723       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   eef94d911dda4       kube-apiserver-default-k8s-diff-port-772252            kube-system
	
	
	==> coredns [b13027790053dd0e5e6527fd3a648fcd681857a87f590c2b991e95825e0f90a6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33472 - 39729 "HINFO IN 8836966644840364263.6977928144449510887. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027749094s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-772252
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-772252
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=default-k8s-diff-port-772252
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_17_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:17:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-772252
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:17:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:17:55 +0000   Sun, 23 Nov 2025 10:17:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:17:55 +0000   Sun, 23 Nov 2025 10:17:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:17:55 +0000   Sun, 23 Nov 2025 10:17:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:17:55 +0000   Sun, 23 Nov 2025 10:17:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-772252
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                752b5ee7-1a37-4c91-8868-54a0bdb64fb2
	  Boot ID:                    37682299-5e60-467e-85b2-43c912a4056e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-c5c4c                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-default-k8s-diff-port-772252                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-4dnjf                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-default-k8s-diff-port-772252             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-772252    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-xfghg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-default-k8s-diff-port-772252             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x8 over 37s)  kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node default-k8s-diff-port-772252 event: Registered Node default-k8s-diff-port-772252 in Controller
	  Normal  NodeReady                14s                kubelet          Node default-k8s-diff-port-772252 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[ +16.383752] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 09:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 10:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[Nov23 10:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 16 63 a6 3b 7c 08 06
	[  +0.000421] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e f8 56 88 48 d7 08 06
	[  +0.082350] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff be 6d 17 98 af e9 08 06
	[  +0.000334] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[ +24.687881] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 3c b3 56 e6 32 08 06
	[  +0.000364] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da b2 25 9e f0 5d 08 06
	[Nov23 10:16] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	[ +42.472302] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 bc be 6d 36 b3 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	
	
	==> etcd [6a89ab4d187974ecea9f3cbe3b788dc8ac558c700c57e3229ffe66d461aaedde] <==
	{"level":"warn","ts":"2025-11-23T10:17:35.940353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:35.949682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:35.960119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:35.970182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:35.989889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:36.001300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:36.009823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:36.021570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:36.032664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:36.047570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:36.056233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:36.064521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:36.072324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:36.080733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:36.098252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:36.107198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:36.117102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:36.125196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:36.133295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:36.142266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:36.150172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:36.165904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:36.173654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:36.181507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:36.249157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38540","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:18:09 up  3:00,  0 user,  load average: 4.58, 5.00, 2.98
	Linux default-k8s-diff-port-772252 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7647678a0807714b1ab00bc2ceb07e1bbd1c28710761e522e130f87b01d4af7f] <==
	I1123 10:17:45.504878       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:17:45.505156       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1123 10:17:45.505324       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:17:45.505351       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:17:45.505366       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:17:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:17:45.805398       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:17:45.805471       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:17:45.805486       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:17:45.805654       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:17:46.105622       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:17:46.105664       1 metrics.go:72] Registering metrics
	I1123 10:17:46.105747       1 controller.go:711] "Syncing nftables rules"
	I1123 10:17:55.712183       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 10:17:55.712244       1 main.go:301] handling current node
	I1123 10:18:05.711291       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 10:18:05.711333       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2de34614a6723eb60f766b6b9af9b7419820a7180ceaa3364d8d5830ade9e8b4] <==
	I1123 10:17:36.823928       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 10:17:36.835976       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 10:17:36.836245       1 aggregator.go:171] initial CRD sync complete...
	I1123 10:17:36.836268       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 10:17:36.836277       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:17:36.836285       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:17:37.013570       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:17:37.715882       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 10:17:37.720560       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 10:17:37.720579       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:17:38.160247       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:17:38.193063       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:17:38.319017       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 10:17:38.324773       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1123 10:17:38.325707       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:17:38.329672       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:17:38.749690       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:17:39.261033       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:17:39.272510       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 10:17:39.280265       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 10:17:44.503045       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:17:44.510374       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:17:44.566177       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:17:44.667387       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 10:18:08.232009       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:38052: use of closed network connection
	
	
	==> kube-controller-manager [51597ff2b68a7ca7de861b514b1165ecf3ac6ae366ed862f6df7acb8fb79c272] <==
	I1123 10:17:43.749231       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 10:17:43.749237       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 10:17:43.749412       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 10:17:43.749599       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 10:17:43.750768       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 10:17:43.750868       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 10:17:43.750867       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 10:17:43.752910       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 10:17:43.754130       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:17:43.754220       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 10:17:43.754236       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:17:43.754287       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 10:17:43.754292       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 10:17:43.754351       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 10:17:43.754359       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 10:17:43.754364       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 10:17:43.755572       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 10:17:43.757832       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:17:43.760125       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 10:17:43.761356       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 10:17:43.764738       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-772252" podCIDRs=["10.244.0.0/24"]
	I1123 10:17:43.767732       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 10:17:43.770032       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 10:17:43.774297       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:17:58.702261       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [77de72c36d1f6298ad82429a8d2af55b608e0816d770683dd81d67e10e4e3fc6] <==
	I1123 10:17:45.281040       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:17:45.354410       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:17:45.455157       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:17:45.455199       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1123 10:17:45.455316       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:17:45.474072       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:17:45.474191       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:17:45.480293       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:17:45.480635       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:17:45.480666       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:17:45.482204       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:17:45.482282       1 config.go:200] "Starting service config controller"
	I1123 10:17:45.482313       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:17:45.482329       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:17:45.482316       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:17:45.482295       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:17:45.482394       1 config.go:309] "Starting node config controller"
	I1123 10:17:45.482827       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:17:45.482849       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:17:45.582905       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 10:17:45.582925       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:17:45.582951       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [4a1d587883eda0d84e0c5a59af94b135467ab543b666f84fe78878b4df6c58f9] <==
	E1123 10:17:36.770512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 10:17:36.770556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 10:17:36.770602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 10:17:36.771150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 10:17:36.772030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 10:17:36.774065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 10:17:36.774200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 10:17:36.774469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 10:17:36.774584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 10:17:36.774631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 10:17:36.774732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 10:17:36.775112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 10:17:36.776410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 10:17:36.776487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 10:17:36.776437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 10:17:36.776621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 10:17:37.596451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 10:17:37.652030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 10:17:37.732479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 10:17:37.762902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 10:17:37.843669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 10:17:37.849731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 10:17:37.870201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 10:17:37.907378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1123 10:17:38.265597       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:17:40 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:40.365977    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-772252" podStartSLOduration=1.365951988 podStartE2EDuration="1.365951988s" podCreationTimestamp="2025-11-23 10:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:17:40.356013913 +0000 UTC m=+1.300268586" watchObservedRunningTime="2025-11-23 10:17:40.365951988 +0000 UTC m=+1.310206639"
	Nov 23 10:17:40 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:40.366237    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-772252" podStartSLOduration=1.366221038 podStartE2EDuration="1.366221038s" podCreationTimestamp="2025-11-23 10:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:17:40.365615833 +0000 UTC m=+1.309870513" watchObservedRunningTime="2025-11-23 10:17:40.366221038 +0000 UTC m=+1.310475721"
	Nov 23 10:17:40 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:40.390434    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-772252" podStartSLOduration=1.3904137429999999 podStartE2EDuration="1.390413743s" podCreationTimestamp="2025-11-23 10:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:17:40.376954148 +0000 UTC m=+1.321208821" watchObservedRunningTime="2025-11-23 10:17:40.390413743 +0000 UTC m=+1.334668450"
	Nov 23 10:17:43 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:43.794078    1336 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 10:17:43 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:43.794898    1336 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 10:17:44 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:44.884078    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3258335f-0700-4a89-8857-c10cfc091182-lib-modules\") pod \"kindnet-4dnjf\" (UID: \"3258335f-0700-4a89-8857-c10cfc091182\") " pod="kube-system/kindnet-4dnjf"
	Nov 23 10:17:44 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:44.884145    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-299zl\" (UniqueName: \"kubernetes.io/projected/3258335f-0700-4a89-8857-c10cfc091182-kube-api-access-299zl\") pod \"kindnet-4dnjf\" (UID: \"3258335f-0700-4a89-8857-c10cfc091182\") " pod="kube-system/kindnet-4dnjf"
	Nov 23 10:17:44 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:44.884195    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cf715f4-c1ca-4938-a213-7095cb2c7823-xtables-lock\") pod \"kube-proxy-xfghg\" (UID: \"5cf715f4-c1ca-4938-a213-7095cb2c7823\") " pod="kube-system/kube-proxy-xfghg"
	Nov 23 10:17:44 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:44.884209    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cf715f4-c1ca-4938-a213-7095cb2c7823-lib-modules\") pod \"kube-proxy-xfghg\" (UID: \"5cf715f4-c1ca-4938-a213-7095cb2c7823\") " pod="kube-system/kube-proxy-xfghg"
	Nov 23 10:17:44 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:44.884223    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csmr5\" (UniqueName: \"kubernetes.io/projected/5cf715f4-c1ca-4938-a213-7095cb2c7823-kube-api-access-csmr5\") pod \"kube-proxy-xfghg\" (UID: \"5cf715f4-c1ca-4938-a213-7095cb2c7823\") " pod="kube-system/kube-proxy-xfghg"
	Nov 23 10:17:44 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:44.884284    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3258335f-0700-4a89-8857-c10cfc091182-cni-cfg\") pod \"kindnet-4dnjf\" (UID: \"3258335f-0700-4a89-8857-c10cfc091182\") " pod="kube-system/kindnet-4dnjf"
	Nov 23 10:17:44 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:44.884329    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5cf715f4-c1ca-4938-a213-7095cb2c7823-kube-proxy\") pod \"kube-proxy-xfghg\" (UID: \"5cf715f4-c1ca-4938-a213-7095cb2c7823\") " pod="kube-system/kube-proxy-xfghg"
	Nov 23 10:17:44 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:44.884343    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3258335f-0700-4a89-8857-c10cfc091182-xtables-lock\") pod \"kindnet-4dnjf\" (UID: \"3258335f-0700-4a89-8857-c10cfc091182\") " pod="kube-system/kindnet-4dnjf"
	Nov 23 10:17:46 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:46.215032    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4dnjf" podStartSLOduration=2.214989042 podStartE2EDuration="2.214989042s" podCreationTimestamp="2025-11-23 10:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:17:46.214977042 +0000 UTC m=+7.159231714" watchObservedRunningTime="2025-11-23 10:17:46.214989042 +0000 UTC m=+7.159243714"
	Nov 23 10:17:46 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:46.225590    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xfghg" podStartSLOduration=2.225568251 podStartE2EDuration="2.225568251s" podCreationTimestamp="2025-11-23 10:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:17:46.225493708 +0000 UTC m=+7.169748382" watchObservedRunningTime="2025-11-23 10:17:46.225568251 +0000 UTC m=+7.169822923"
	Nov 23 10:17:55 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:55.841936    1336 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 10:17:55 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:55.969481    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9d727e76-94f8-4344-820c-f2d4e83f5d87-tmp\") pod \"storage-provisioner\" (UID: \"9d727e76-94f8-4344-820c-f2d4e83f5d87\") " pod="kube-system/storage-provisioner"
	Nov 23 10:17:55 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:55.969525    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8krnp\" (UniqueName: \"kubernetes.io/projected/9d727e76-94f8-4344-820c-f2d4e83f5d87-kube-api-access-8krnp\") pod \"storage-provisioner\" (UID: \"9d727e76-94f8-4344-820c-f2d4e83f5d87\") " pod="kube-system/storage-provisioner"
	Nov 23 10:17:55 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:55.969557    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b393f50c-f83f-45b4-8c27-56971c3279c0-config-volume\") pod \"coredns-66bc5c9577-c5c4c\" (UID: \"b393f50c-f83f-45b4-8c27-56971c3279c0\") " pod="kube-system/coredns-66bc5c9577-c5c4c"
	Nov 23 10:17:55 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:55.969571    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljlwj\" (UniqueName: \"kubernetes.io/projected/b393f50c-f83f-45b4-8c27-56971c3279c0-kube-api-access-ljlwj\") pod \"coredns-66bc5c9577-c5c4c\" (UID: \"b393f50c-f83f-45b4-8c27-56971c3279c0\") " pod="kube-system/coredns-66bc5c9577-c5c4c"
	Nov 23 10:17:56 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:56.240267    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.240244939 podStartE2EDuration="11.240244939s" podCreationTimestamp="2025-11-23 10:17:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:17:56.239800198 +0000 UTC m=+17.184054870" watchObservedRunningTime="2025-11-23 10:17:56.240244939 +0000 UTC m=+17.184499609"
	Nov 23 10:17:57 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:57.241272    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-c5c4c" podStartSLOduration=13.241251953 podStartE2EDuration="13.241251953s" podCreationTimestamp="2025-11-23 10:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:17:57.240532732 +0000 UTC m=+18.184787403" watchObservedRunningTime="2025-11-23 10:17:57.241251953 +0000 UTC m=+18.185506626"
	Nov 23 10:17:59 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:17:59.189326    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f896x\" (UniqueName: \"kubernetes.io/projected/c037ffcf-7b8b-4442-9c4e-d188a4de7b08-kube-api-access-f896x\") pod \"busybox\" (UID: \"c037ffcf-7b8b-4442-9c4e-d188a4de7b08\") " pod="default/busybox"
	Nov 23 10:18:02 default-k8s-diff-port-772252 kubelet[1336]: I1123 10:18:02.255118    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.091723878 podStartE2EDuration="3.255052141s" podCreationTimestamp="2025-11-23 10:17:59 +0000 UTC" firstStartedPulling="2025-11-23 10:17:59.461688816 +0000 UTC m=+20.405943470" lastFinishedPulling="2025-11-23 10:18:01.625017065 +0000 UTC m=+22.569271733" observedRunningTime="2025-11-23 10:18:02.255055217 +0000 UTC m=+23.199309887" watchObservedRunningTime="2025-11-23 10:18:02.255052141 +0000 UTC m=+23.199306812"
	Nov 23 10:18:08 default-k8s-diff-port-772252 kubelet[1336]: E1123 10:18:08.232023    1336 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44934->127.0.0.1:42189: write tcp 127.0.0.1:44934->127.0.0.1:42189: write: broken pipe
	
	
	==> storage-provisioner [fa8ae7896b487f214e0ee18cd7455c509712a89da30321759164e8fac353f7c3] <==
	I1123 10:17:56.233444       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:17:56.241860       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:17:56.242017       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:17:56.247472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:17:56.253563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:17:56.253842       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:17:56.253924       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ed91b9a4-76da-498a-b1ac-8ef14ef3f49c", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-772252_c733ac8a-590a-4267-959c-43e5a5403ef4 became leader
	I1123 10:17:56.254046       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-772252_c733ac8a-590a-4267-959c-43e5a5403ef4!
	W1123 10:17:56.257137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:17:56.262972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:17:56.355134       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-772252_c733ac8a-590a-4267-959c-43e5a5403ef4!
	W1123 10:17:58.265853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:17:58.269924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:00.273537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:00.278329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:02.281598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:02.285822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:04.289613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:04.294927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:06.298767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:06.302920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:08.306128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:08.312083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-772252 -n default-k8s-diff-port-772252
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-772252 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.37s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (7.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-541522 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-541522 --alsologtostderr -v=1: exit status 80 (2.291920057s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-541522 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:18:13.570362  383371 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:18:13.570629  383371 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:13.570640  383371 out.go:374] Setting ErrFile to fd 2...
	I1123 10:18:13.570644  383371 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:13.570845  383371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:18:13.571069  383371 out.go:368] Setting JSON to false
	I1123 10:18:13.571104  383371 mustload.go:66] Loading cluster: no-preload-541522
	I1123 10:18:13.571491  383371 config.go:182] Loaded profile config "no-preload-541522": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:13.571902  383371 cli_runner.go:164] Run: docker container inspect no-preload-541522 --format={{.State.Status}}
	I1123 10:18:13.589639  383371 host.go:66] Checking if "no-preload-541522" exists ...
	I1123 10:18:13.589969  383371 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:18:13.651911  383371 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:88 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-23 10:18:13.641769478 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:18:13.652778  383371 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-541522 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 10:18:13.654822  383371 out.go:179] * Pausing node no-preload-541522 ... 
	I1123 10:18:13.655950  383371 host.go:66] Checking if "no-preload-541522" exists ...
	I1123 10:18:13.656366  383371 ssh_runner.go:195] Run: systemctl --version
	I1123 10:18:13.656431  383371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-541522
	I1123 10:18:13.674505  383371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/no-preload-541522/id_rsa Username:docker}
	I1123 10:18:13.775836  383371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:18:13.803834  383371 pause.go:52] kubelet running: true
	I1123 10:18:13.803930  383371 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:18:13.970951  383371 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:18:13.971052  383371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:18:14.042301  383371 cri.go:89] found id: "0b05c287971f7cba3e6f8c1b6ce364a72dac9d8cb9a1b907d3544285a4cdde68"
	I1123 10:18:14.042325  383371 cri.go:89] found id: "22076e3d0001c319bbb5b8eb5af9a218edede50a27ff2fca46a99b91e20e37c1"
	I1123 10:18:14.042329  383371 cri.go:89] found id: "c1907adeaa6a131d9b2c1bc89e267c99d679be8842bb2f6d15b9fcc745975d47"
	I1123 10:18:14.042332  383371 cri.go:89] found id: "0b033ba843a9c8de8730dd081e3ca3cd3e9327b7d05531c1a7d30ecee4a00edb"
	I1123 10:18:14.042335  383371 cri.go:89] found id: "552c1cc61f9b4b2ebc92a448d957f042ab6a8903da1181a5136796e2f5ed4c24"
	I1123 10:18:14.042339  383371 cri.go:89] found id: "3638abd54c634ee34a952430b3c8ad3b8c78fb2c6abb24bdbdb0382ea4147574"
	I1123 10:18:14.042342  383371 cri.go:89] found id: "3806d3b11c0c4af0a295b79daeec9cddc1ca76da75190a71f7234b95f181f202"
	I1123 10:18:14.042345  383371 cri.go:89] found id: "454d88050f14061405415d3f827ed9bd0308c85f15a90182f9e2c8138c52f80e"
	I1123 10:18:14.042348  383371 cri.go:89] found id: "a08adaf22d6a20e8d1bde7d9ffe78523a672a25236e3b7bd280fe7482c65da6c"
	I1123 10:18:14.042364  383371 cri.go:89] found id: "1a6b552b31a47947fd2e0f3c471e54b8792bf961ce9509dffe210303b8bcb455"
	I1123 10:18:14.042367  383371 cri.go:89] found id: "8910e96f3502b14ff942cd962a23008447c9446e693d1e751f367dee3fba3ab3"
	I1123 10:18:14.042370  383371 cri.go:89] found id: ""
	I1123 10:18:14.042422  383371 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:18:14.054665  383371 retry.go:31] will retry after 165.123387ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:18:14Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:18:14.220055  383371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:18:14.233810  383371 pause.go:52] kubelet running: false
	I1123 10:18:14.233865  383371 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:18:14.371560  383371 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:18:14.371633  383371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:18:14.441884  383371 cri.go:89] found id: "0b05c287971f7cba3e6f8c1b6ce364a72dac9d8cb9a1b907d3544285a4cdde68"
	I1123 10:18:14.441912  383371 cri.go:89] found id: "22076e3d0001c319bbb5b8eb5af9a218edede50a27ff2fca46a99b91e20e37c1"
	I1123 10:18:14.441918  383371 cri.go:89] found id: "c1907adeaa6a131d9b2c1bc89e267c99d679be8842bb2f6d15b9fcc745975d47"
	I1123 10:18:14.441922  383371 cri.go:89] found id: "0b033ba843a9c8de8730dd081e3ca3cd3e9327b7d05531c1a7d30ecee4a00edb"
	I1123 10:18:14.441927  383371 cri.go:89] found id: "552c1cc61f9b4b2ebc92a448d957f042ab6a8903da1181a5136796e2f5ed4c24"
	I1123 10:18:14.441931  383371 cri.go:89] found id: "3638abd54c634ee34a952430b3c8ad3b8c78fb2c6abb24bdbdb0382ea4147574"
	I1123 10:18:14.441935  383371 cri.go:89] found id: "3806d3b11c0c4af0a295b79daeec9cddc1ca76da75190a71f7234b95f181f202"
	I1123 10:18:14.441946  383371 cri.go:89] found id: "454d88050f14061405415d3f827ed9bd0308c85f15a90182f9e2c8138c52f80e"
	I1123 10:18:14.441951  383371 cri.go:89] found id: "a08adaf22d6a20e8d1bde7d9ffe78523a672a25236e3b7bd280fe7482c65da6c"
	I1123 10:18:14.441960  383371 cri.go:89] found id: "1a6b552b31a47947fd2e0f3c471e54b8792bf961ce9509dffe210303b8bcb455"
	I1123 10:18:14.441978  383371 cri.go:89] found id: "8910e96f3502b14ff942cd962a23008447c9446e693d1e751f367dee3fba3ab3"
	I1123 10:18:14.441985  383371 cri.go:89] found id: ""
	I1123 10:18:14.442034  383371 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:18:14.456388  383371 retry.go:31] will retry after 291.707343ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:18:14Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:18:14.748970  383371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:18:14.762127  383371 pause.go:52] kubelet running: false
	I1123 10:18:14.762189  383371 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:18:14.917041  383371 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:18:14.917135  383371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:18:14.990851  383371 cri.go:89] found id: "0b05c287971f7cba3e6f8c1b6ce364a72dac9d8cb9a1b907d3544285a4cdde68"
	I1123 10:18:14.990880  383371 cri.go:89] found id: "22076e3d0001c319bbb5b8eb5af9a218edede50a27ff2fca46a99b91e20e37c1"
	I1123 10:18:14.990888  383371 cri.go:89] found id: "c1907adeaa6a131d9b2c1bc89e267c99d679be8842bb2f6d15b9fcc745975d47"
	I1123 10:18:14.990893  383371 cri.go:89] found id: "0b033ba843a9c8de8730dd081e3ca3cd3e9327b7d05531c1a7d30ecee4a00edb"
	I1123 10:18:14.990897  383371 cri.go:89] found id: "552c1cc61f9b4b2ebc92a448d957f042ab6a8903da1181a5136796e2f5ed4c24"
	I1123 10:18:14.990902  383371 cri.go:89] found id: "3638abd54c634ee34a952430b3c8ad3b8c78fb2c6abb24bdbdb0382ea4147574"
	I1123 10:18:14.990906  383371 cri.go:89] found id: "3806d3b11c0c4af0a295b79daeec9cddc1ca76da75190a71f7234b95f181f202"
	I1123 10:18:14.990911  383371 cri.go:89] found id: "454d88050f14061405415d3f827ed9bd0308c85f15a90182f9e2c8138c52f80e"
	I1123 10:18:14.990926  383371 cri.go:89] found id: "a08adaf22d6a20e8d1bde7d9ffe78523a672a25236e3b7bd280fe7482c65da6c"
	I1123 10:18:14.990939  383371 cri.go:89] found id: "1a6b552b31a47947fd2e0f3c471e54b8792bf961ce9509dffe210303b8bcb455"
	I1123 10:18:14.990944  383371 cri.go:89] found id: "8910e96f3502b14ff942cd962a23008447c9446e693d1e751f367dee3fba3ab3"
	I1123 10:18:14.990948  383371 cri.go:89] found id: ""
	I1123 10:18:14.990994  383371 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:18:15.002776  383371 retry.go:31] will retry after 530.422184ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:18:15Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:18:15.534365  383371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:18:15.547850  383371 pause.go:52] kubelet running: false
	I1123 10:18:15.547920  383371 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:18:15.701838  383371 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:18:15.701910  383371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:18:15.772862  383371 cri.go:89] found id: "0b05c287971f7cba3e6f8c1b6ce364a72dac9d8cb9a1b907d3544285a4cdde68"
	I1123 10:18:15.772892  383371 cri.go:89] found id: "22076e3d0001c319bbb5b8eb5af9a218edede50a27ff2fca46a99b91e20e37c1"
	I1123 10:18:15.772898  383371 cri.go:89] found id: "c1907adeaa6a131d9b2c1bc89e267c99d679be8842bb2f6d15b9fcc745975d47"
	I1123 10:18:15.772904  383371 cri.go:89] found id: "0b033ba843a9c8de8730dd081e3ca3cd3e9327b7d05531c1a7d30ecee4a00edb"
	I1123 10:18:15.772908  383371 cri.go:89] found id: "552c1cc61f9b4b2ebc92a448d957f042ab6a8903da1181a5136796e2f5ed4c24"
	I1123 10:18:15.772914  383371 cri.go:89] found id: "3638abd54c634ee34a952430b3c8ad3b8c78fb2c6abb24bdbdb0382ea4147574"
	I1123 10:18:15.772918  383371 cri.go:89] found id: "3806d3b11c0c4af0a295b79daeec9cddc1ca76da75190a71f7234b95f181f202"
	I1123 10:18:15.772922  383371 cri.go:89] found id: "454d88050f14061405415d3f827ed9bd0308c85f15a90182f9e2c8138c52f80e"
	I1123 10:18:15.772927  383371 cri.go:89] found id: "a08adaf22d6a20e8d1bde7d9ffe78523a672a25236e3b7bd280fe7482c65da6c"
	I1123 10:18:15.772952  383371 cri.go:89] found id: "1a6b552b31a47947fd2e0f3c471e54b8792bf961ce9509dffe210303b8bcb455"
	I1123 10:18:15.772955  383371 cri.go:89] found id: "8910e96f3502b14ff942cd962a23008447c9446e693d1e751f367dee3fba3ab3"
	I1123 10:18:15.772957  383371 cri.go:89] found id: ""
	I1123 10:18:15.772996  383371 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:18:15.789049  383371 out.go:203] 
	W1123 10:18:15.790263  383371 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:18:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:18:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:18:15.790292  383371 out.go:285] * 
	* 
	W1123 10:18:15.797305  383371 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:18:15.798534  383371 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-541522 --alsologtostderr -v=1 failed: exit status 80
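The failure above reduces to a single command: every pause attempt runs `sudo runc list -f json`, which exits 1 with "open /run/runc: no such file or directory", so the retry loop (165ms, 291ms, 530ms backoff) never obtains a container list and minikube exits with GUEST_PAUSE. A minimal diagnostic sketch, assuming the default CRI-O config layout (the runtime_root key and the /run/crio path are assumptions, not shown in this log):

    out/minikube-linux-amd64 ssh -p no-preload-541522 "ls -ld /run/runc /run/crio"
    out/minikube-linux-amd64 ssh -p no-preload-541522 "sudo crio config 2>/dev/null | grep -n runtime_root"
    # re-run the exact command pause issues, against whatever root the previous step reports (placeholder)
    out/minikube-linux-amd64 ssh -p no-preload-541522 "sudo runc --root <runtime_root-from-above> list -f json"

If CRI-O keeps its runc state under a root other than /run/runc, the bare `runc list` has nothing to read, which would be consistent with the error seen here.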
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-541522
helpers_test.go:243: (dbg) docker inspect no-preload-541522:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6eb78d2b6b76b54751cdbc6803f7c5e6c001120afa09311adefdc9e243248ba",
	        "Created": "2025-11-23T10:15:44.853738209Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 371572,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:17:13.048471684Z",
	            "FinishedAt": "2025-11-23T10:17:11.518342413Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/e6eb78d2b6b76b54751cdbc6803f7c5e6c001120afa09311adefdc9e243248ba/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6eb78d2b6b76b54751cdbc6803f7c5e6c001120afa09311adefdc9e243248ba/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6eb78d2b6b76b54751cdbc6803f7c5e6c001120afa09311adefdc9e243248ba/hosts",
	        "LogPath": "/var/lib/docker/containers/e6eb78d2b6b76b54751cdbc6803f7c5e6c001120afa09311adefdc9e243248ba/e6eb78d2b6b76b54751cdbc6803f7c5e6c001120afa09311adefdc9e243248ba-json.log",
	        "Name": "/no-preload-541522",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-541522:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-541522",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6eb78d2b6b76b54751cdbc6803f7c5e6c001120afa09311adefdc9e243248ba",
	                "LowerDir": "/var/lib/docker/overlay2/23785fec93f41cf14687a94fe439202e1986b9d5ecc74e3696510796f789088e-init/diff:/var/lib/docker/overlay2/fa24abb4c55f78a010c7e2a32f724b8d5e912441e40bb77877899b0e5f3a9c8d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/23785fec93f41cf14687a94fe439202e1986b9d5ecc74e3696510796f789088e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/23785fec93f41cf14687a94fe439202e1986b9d5ecc74e3696510796f789088e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/23785fec93f41cf14687a94fe439202e1986b9d5ecc74e3696510796f789088e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-541522",
	                "Source": "/var/lib/docker/volumes/no-preload-541522/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-541522",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-541522",
	                "name.minikube.sigs.k8s.io": "no-preload-541522",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6708dc35a9badd634f9805cd398ce6d84b075ddf3b84b69ff07b0cf02cd9c12d",
	            "SandboxKey": "/var/run/docker/netns/6708dc35a9ba",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-541522": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0caff4f103e2bb50c273486830a8e865b14f6dbe8e146654adb86f6d80472821",
	                    "EndpointID": "9e6509e611a2c3d344f94605c775fd2ab40aeb8f84ea3dbcb3e369308fcb4c2c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "96:4a:5c:95:23:9f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-541522",
	                        "e6eb78d2b6b7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
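Only a few of the fields in the inspect dump above matter for the post-mortem (container state, published ports, network address). A shorter equivalent using docker inspect's built-in Go-template output (a sketch using only standard docker flags):

    docker inspect -f '{{.State.Status}} pid={{.State.Pid}} paused={{.State.Paused}}' no-preload-541522
    docker inspect -f '{{json .NetworkSettings.Ports}}' no-preload-541522
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-541522

Per the dump, the container is Running and not paused, with 8443/tcp published on 127.0.0.1:33111, so the failed pause left the node container itself untouched.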
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-541522 -n no-preload-541522
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-541522 -n no-preload-541522: exit status 2 (392.478443ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
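The single-field template used here ({{.Host}}) only reports the host state; the non-zero exit means some other component is not reported as Running, which is consistent with the kubelet having been stopped by the pause attempts above (pause.go reported "kubelet running: false" after the first iteration) and is why the harness notes "(may be ok)". A fuller view of the same status (a sketch; --output and --format are standard minikube status flags, and the field names follow the template already used by the test):

    out/minikube-linux-amd64 status -p no-preload-541522 --output json
    out/minikube-linux-amd64 status -p no-preload-541522 --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'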
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-541522 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-541522 logs -n 25: (1.231981061s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-791161 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo containerd config dump                                                                                                                                                                                                  │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo crio config                                                                                                                                                                                                             │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ delete  │ -p bridge-791161                                                                                                                                                                                                                              │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ delete  │ -p disable-driver-mounts-268907                                                                                                                                                                                                               │ disable-driver-mounts-268907 │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-541522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p default-k8s-diff-port-772252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-412306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ start   │ -p embed-certs-412306 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ old-k8s-version-990757 image list --format=json                                                                                                                                                                                               │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p old-k8s-version-990757 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-772252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-772252 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ delete  │ -p old-k8s-version-990757                                                                                                                                                                                                                     │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ no-preload-541522 image list --format=json                                                                                                                                                                                                    │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p no-preload-541522 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ delete  │ -p old-k8s-version-990757                                                                                                                                                                                                                     │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ embed-certs-412306 image list --format=json                                                                                                                                                                                                   │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p newest-cni-956615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ pause   │ -p embed-certs-412306 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
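The Audit trail pins down the sequence: no-preload-541522 was started fresh at 10:17 with --preload=false, the image list at 10:18 succeeded, and the pause row has no END TIME, matching the exit status 80 above. Re-running only the failing step outside the harness uses the same binary and flags as that row:

    out/minikube-linux-amd64 pause -p no-preload-541522 --alsologtostderr -v=1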
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:18:16
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:18:16.055139  384087 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:18:16.055453  384087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:16.055465  384087 out.go:374] Setting ErrFile to fd 2...
	I1123 10:18:16.055471  384087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:16.055752  384087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:18:16.056433  384087 out.go:368] Setting JSON to false
	I1123 10:18:16.058300  384087 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10837,"bootTime":1763882259,"procs":489,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:18:16.058361  384087 start.go:143] virtualization: kvm guest
	I1123 10:18:16.060255  384087 out.go:179] * [newest-cni-956615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:18:16.062154  384087 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:18:16.062208  384087 notify.go:221] Checking for updates...
	I1123 10:18:16.065653  384087 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:18:16.066941  384087 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:18:16.068519  384087 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 10:18:16.069705  384087 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:18:16.070753  384087 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:18:16.075296  384087 config.go:182] Loaded profile config "default-k8s-diff-port-772252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:16.075441  384087 config.go:182] Loaded profile config "embed-certs-412306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:16.075581  384087 config.go:182] Loaded profile config "no-preload-541522": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:16.075700  384087 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:18:16.103550  384087 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 10:18:16.103724  384087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:18:16.179700  384087 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:84 SystemTime:2025-11-23 10:18:16.167474698 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:18:16.179880  384087 docker.go:319] overlay module found
	I1123 10:18:16.181785  384087 out.go:179] * Using the docker driver based on user configuration
	I1123 10:18:16.182797  384087 start.go:309] selected driver: docker
	I1123 10:18:16.182811  384087 start.go:927] validating driver "docker" against <nil>
	I1123 10:18:16.182821  384087 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:18:16.183397  384087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:18:16.255867  384087 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:83 SystemTime:2025-11-23 10:18:16.241912897 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:18:16.256083  384087 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1123 10:18:16.256153  384087 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1123 10:18:16.256493  384087 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:18:16.258264  384087 out.go:179] * Using Docker driver with root privileges
	I1123 10:18:16.259381  384087 cni.go:84] Creating CNI manager for ""
	I1123 10:18:16.259470  384087 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:18:16.259481  384087 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:18:16.259575  384087 start.go:353] cluster config:
	{Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:18:16.260875  384087 out.go:179] * Starting "newest-cni-956615" primary control-plane node in "newest-cni-956615" cluster
	I1123 10:18:16.262276  384087 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:18:16.263490  384087 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:18:16.265212  384087 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:18:16.265252  384087 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 10:18:16.265262  384087 cache.go:65] Caching tarball of preloaded images
	I1123 10:18:16.265304  384087 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:18:16.265381  384087 preload.go:238] Found /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 10:18:16.265397  384087 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:18:16.265504  384087 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/config.json ...
	I1123 10:18:16.265527  384087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/config.json: {Name:mkb811d74a6c8dfdcb785bec927cfa094dfd91e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:16.288941  384087 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:18:16.288968  384087 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:18:16.289001  384087 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:18:16.289049  384087 start.go:360] acquireMachinesLock for newest-cni-956615: {Name:mk5c1d30234ac54be25b363f4d474b6dfbb1cb30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:18:16.289196  384087 start.go:364] duration metric: took 122.072µs to acquireMachinesLock for "newest-cni-956615"
	I1123 10:18:16.289230  384087 start.go:93] Provisioning new machine with config: &{Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:18:16.289350  384087 start.go:125] createHost starting for "" (driver="docker")
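Note that this "Last Start" excerpt belongs to the parallel newest-cni-956615 start, not to the profile under test; minikube logs appears to include the most recent start log regardless of profile, which is why another test's output shows up here. The cluster config it prints is also persisted to disk, so it can be read back directly (the first path is taken from the WriteFile line above; the second assumes the same layout for the no-preload profile):

    cat /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/config.json
    cat /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/no-preload-541522/config.json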
	
	
	==> CRI-O <==
	Nov 23 10:17:35 no-preload-541522 crio[565]: time="2025-11-23T10:17:35.981320388Z" level=info msg="Created container 8910e96f3502b14ff942cd962a23008447c9446e693d1e751f367dee3fba3ab3: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v2hjb/kubernetes-dashboard" id=af48d590-131d-4165-95fc-3bd6452e2886 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:35 no-preload-541522 crio[565]: time="2025-11-23T10:17:35.982274108Z" level=info msg="Starting container: 8910e96f3502b14ff942cd962a23008447c9446e693d1e751f367dee3fba3ab3" id=307dff47-d191-4e8f-8390-655b8ce7e6e7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:17:35 no-preload-541522 crio[565]: time="2025-11-23T10:17:35.9849562Z" level=info msg="Started container" PID=1711 containerID=8910e96f3502b14ff942cd962a23008447c9446e693d1e751f367dee3fba3ab3 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v2hjb/kubernetes-dashboard id=307dff47-d191-4e8f-8390-655b8ce7e6e7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fec4a88e2a77f65fb7ba4b8a818b21ed7d309b67faaff37e62f790bc56537851
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.538134481Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d5d63e48-b4be-422a-971d-601990533d2b name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.539163701Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b2b54e37-fbdc-4da8-bc5d-441db54eef5d name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.540292463Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt/dashboard-metrics-scraper" id=411eb49d-00f5-4e0f-96c5-85843bdc593b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.540421308Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.546838634Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.547363625Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.573764474Z" level=info msg="Created container 1a6b552b31a47947fd2e0f3c471e54b8792bf961ce9509dffe210303b8bcb455: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt/dashboard-metrics-scraper" id=411eb49d-00f5-4e0f-96c5-85843bdc593b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.574347141Z" level=info msg="Starting container: 1a6b552b31a47947fd2e0f3c471e54b8792bf961ce9509dffe210303b8bcb455" id=cce867d5-eeac-4450-9247-4cba4a2514e8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.575983398Z" level=info msg="Started container" PID=1733 containerID=1a6b552b31a47947fd2e0f3c471e54b8792bf961ce9509dffe210303b8bcb455 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt/dashboard-metrics-scraper id=cce867d5-eeac-4450-9247-4cba4a2514e8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8c47cb36640b442c3feeee4b1693bdb7525eda4bf359f639a8778e89578c2d71
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.670662202Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=faff32c1-dc6e-4b9d-a709-d552a340d565 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.671669556Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=43055858-1cce-4048-94fe-0814900312a3 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.672784153Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7a89c969-2b5e-4c4d-b65e-276ca770971f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.673071371Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.673636927Z" level=info msg="Removing container: e274afd831e13632e24ee381c6ce1b02bcd4c020bd0f56802cd7b0ccd5fac032" id=ce9dfccb-6ad0-48ee-9977-8185dc0595dc name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.681047489Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.681268023Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/31565a2c52885fc5a60ecb039ec9e07f9ac10efdc162d599e724e22ba99ce0f4/merged/etc/passwd: no such file or directory"
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.681296379Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/31565a2c52885fc5a60ecb039ec9e07f9ac10efdc162d599e724e22ba99ce0f4/merged/etc/group: no such file or directory"
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.681577059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.686063106Z" level=info msg="Removed container e274afd831e13632e24ee381c6ce1b02bcd4c020bd0f56802cd7b0ccd5fac032: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt/dashboard-metrics-scraper" id=ce9dfccb-6ad0-48ee-9977-8185dc0595dc name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.73435402Z" level=info msg="Created container 0b05c287971f7cba3e6f8c1b6ce364a72dac9d8cb9a1b907d3544285a4cdde68: kube-system/storage-provisioner/storage-provisioner" id=7a89c969-2b5e-4c4d-b65e-276ca770971f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.735029514Z" level=info msg="Starting container: 0b05c287971f7cba3e6f8c1b6ce364a72dac9d8cb9a1b907d3544285a4cdde68" id=0ecffe64-9721-4732-95e9-ed913fbe06a1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.73695129Z" level=info msg="Started container" PID=1743 containerID=0b05c287971f7cba3e6f8c1b6ce364a72dac9d8cb9a1b907d3544285a4cdde68 description=kube-system/storage-provisioner/storage-provisioner id=0ecffe64-9721-4732-95e9-ed913fbe06a1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=98324ce2ee3a9b24c3a44bfe8291d8f044a9c564d156d188fb450d1f942ebea1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0b05c287971f7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   98324ce2ee3a9       storage-provisioner                          kube-system
	1a6b552b31a47       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   8c47cb36640b4       dashboard-metrics-scraper-6ffb444bf9-npfkt   kubernetes-dashboard
	8910e96f3502b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   fec4a88e2a77f       kubernetes-dashboard-855c9754f9-v2hjb        kubernetes-dashboard
	22076e3d0001c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   442e1d35654ab       coredns-66bc5c9577-krmwt                     kube-system
	eccd95e49c6a3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   66c1cbcaaadf2       busybox                                      default
	c1907adeaa6a1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   98324ce2ee3a9       storage-provisioner                          kube-system
	0b033ba843a9c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   121d6285d3bd3       kube-proxy-sllct                             kube-system
	552c1cc61f9b4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   0730b7f05f25d       kindnet-9vppw                                kube-system
	3638abd54c634       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   3cde3e514ab21       kube-apiserver-no-preload-541522             kube-system
	3806d3b11c0c4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   87f356ab41d7e       kube-scheduler-no-preload-541522             kube-system
	454d88050f140       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   36fffe2952036       etcd-no-preload-541522                       kube-system
	a08adaf22d6a2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   ea9620d7ae81a       kube-controller-manager-no-preload-541522    kube-system
	
	
	==> coredns [22076e3d0001c319bbb5b8eb5af9a218edede50a27ff2fca46a99b91e20e37c1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33696 - 15771 "HINFO IN 8244955863368741662.4325592613209093872. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032073638s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-541522
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-541522
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=no-preload-541522
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_16_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:16:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-541522
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:18:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:18:03 +0000   Sun, 23 Nov 2025 10:16:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:18:03 +0000   Sun, 23 Nov 2025 10:16:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:18:03 +0000   Sun, 23 Nov 2025 10:16:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:18:03 +0000   Sun, 23 Nov 2025 10:16:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-541522
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                9eef6a41-5317-48ee-8389-6d173ebb4813
	  Boot ID:                    37682299-5e60-467e-85b2-43c912a4056e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-krmwt                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-no-preload-541522                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-9vppw                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-no-preload-541522              250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-no-preload-541522     200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-sllct                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-no-preload-541522              100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-npfkt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-v2hjb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 111s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  118s               kubelet          Node no-preload-541522 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s               kubelet          Node no-preload-541522 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s               kubelet          Node no-preload-541522 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           114s               node-controller  Node no-preload-541522 event: Registered Node no-preload-541522 in Controller
	  Normal  NodeReady                99s                kubelet          Node no-preload-541522 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node no-preload-541522 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node no-preload-541522 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node no-preload-541522 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node no-preload-541522 event: Registered Node no-preload-541522 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[ +16.383752] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 09:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 10:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[Nov23 10:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 16 63 a6 3b 7c 08 06
	[  +0.000421] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e f8 56 88 48 d7 08 06
	[  +0.082350] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff be 6d 17 98 af e9 08 06
	[  +0.000334] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[ +24.687881] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 3c b3 56 e6 32 08 06
	[  +0.000364] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da b2 25 9e f0 5d 08 06
	[Nov23 10:16] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	[ +42.472302] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 bc be 6d 36 b3 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	
	
	==> etcd [454d88050f14061405415d3f827ed9bd0308c85f15a90182f9e2c8138c52f80e] <==
	{"level":"warn","ts":"2025-11-23T10:17:22.099417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.106639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.113183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.119623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.127196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.134372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.141167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.150263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.157803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.165731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.172625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.179276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.186915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.195498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.203310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.211390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.217808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.225250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.243369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.251190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.258382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.265668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.307442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:33.518674Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"180.651422ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-krmwt\" limit:1 ","response":"range_response_count:1 size:5933"}
	{"level":"info","ts":"2025-11-23T10:17:33.518839Z","caller":"traceutil/trace.go:172","msg":"trace[1046720401] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-krmwt; range_end:; response_count:1; response_revision:580; }","duration":"180.839997ms","start":"2025-11-23T10:17:33.337978Z","end":"2025-11-23T10:17:33.518818Z","steps":["trace[1046720401] 'range keys from in-memory index tree'  (duration: 180.488221ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:18:17 up  3:00,  0 user,  load average: 4.49, 4.97, 2.99
	Linux no-preload-541522 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [552c1cc61f9b4b2ebc92a448d957f042ab6a8903da1181a5136796e2f5ed4c24] <==
	I1123 10:17:24.108655       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:17:24.108917       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 10:17:24.109150       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:17:24.109171       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:17:24.109193       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:17:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:17:24.313329       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:17:24.313475       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:17:24.313491       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:17:24.313838       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:17:24.698334       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:17:24.698375       1 metrics.go:72] Registering metrics
	I1123 10:17:24.698471       1 controller.go:711] "Syncing nftables rules"
	I1123 10:17:34.313181       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:17:34.313229       1 main.go:301] handling current node
	I1123 10:17:44.313457       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:17:44.313504       1 main.go:301] handling current node
	I1123 10:17:54.314151       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:17:54.314185       1 main.go:301] handling current node
	I1123 10:18:04.313756       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:18:04.313811       1 main.go:301] handling current node
	I1123 10:18:14.316040       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:18:14.316073       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3638abd54c634ee34a952430b3c8ad3b8c78fb2c6abb24bdbdb0382ea4147574] <==
	I1123 10:17:22.790222       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 10:17:22.790590       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 10:17:22.790682       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 10:17:22.791491       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 10:17:22.791698       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 10:17:22.793993       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 10:17:22.794068       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 10:17:22.795045       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 10:17:22.811640       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 10:17:22.816528       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 10:17:22.826345       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 10:17:22.826388       1 policy_source.go:240] refreshing policies
	I1123 10:17:22.828953       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:17:22.843811       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:17:23.076717       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:17:23.109280       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:17:23.130132       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:17:23.138146       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:17:23.144993       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:17:23.181489       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.167.184"}
	I1123 10:17:23.192884       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.144.156"}
	I1123 10:17:23.690493       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:17:26.544823       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:17:26.592472       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:17:26.644240       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a08adaf22d6a20e8d1bde7d9ffe78523a672a25236e3b7bd280fe7482c65da6c] <==
	I1123 10:17:26.100735       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 10:17:26.115552       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:17:26.138113       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 10:17:26.138158       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 10:17:26.138234       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 10:17:26.138236       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 10:17:26.138483       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 10:17:26.138591       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 10:17:26.138502       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 10:17:26.138266       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 10:17:26.138251       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 10:17:26.144392       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 10:17:26.145928       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:17:26.145994       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 10:17:26.148577       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:17:26.150252       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 10:17:26.152534       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 10:17:26.164124       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 10:17:26.166471       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 10:17:26.167850       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:17:26.172980       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 10:17:26.173122       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 10:17:26.173267       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-541522"
	I1123 10:17:26.173326       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 10:17:26.177788       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	
	
	==> kube-proxy [0b033ba843a9c8de8730dd081e3ca3cd3e9327b7d05531c1a7d30ecee4a00edb] <==
	I1123 10:17:23.958838       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:17:24.013757       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:17:24.114654       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:17:24.114689       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 10:17:24.114792       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:17:24.132580       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:17:24.132642       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:17:24.137517       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:17:24.137895       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:17:24.137977       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:17:24.139569       1 config.go:309] "Starting node config controller"
	I1123 10:17:24.139635       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:17:24.139655       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:17:24.139611       1 config.go:200] "Starting service config controller"
	I1123 10:17:24.139677       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:17:24.139637       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:17:24.139710       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:17:24.139566       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:17:24.139718       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:17:24.239853       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 10:17:24.239853       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:17:24.239893       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [3806d3b11c0c4af0a295b79daeec9cddc1ca76da75190a71f7234b95f181f202] <==
	I1123 10:17:21.298917       1 serving.go:386] Generated self-signed cert in-memory
	W1123 10:17:22.732737       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 10:17:22.732856       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 10:17:22.732891       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 10:17:22.732936       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 10:17:22.783200       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 10:17:22.783599       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:17:22.787131       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:17:22.787222       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:17:22.788502       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:17:22.788589       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 10:17:22.887820       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:17:26 no-preload-541522 kubelet[714]: I1123 10:17:26.755537     714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hscw\" (UniqueName: \"kubernetes.io/projected/34f09442-cd95-4537-b45f-aec277f3db4d-kube-api-access-8hscw\") pod \"dashboard-metrics-scraper-6ffb444bf9-npfkt\" (UID: \"34f09442-cd95-4537-b45f-aec277f3db4d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt"
	Nov 23 10:17:26 no-preload-541522 kubelet[714]: I1123 10:17:26.755584     714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svs7g\" (UniqueName: \"kubernetes.io/projected/ee7a029a-15b4-431e-9a2e-31dcbdc111bb-kube-api-access-svs7g\") pod \"kubernetes-dashboard-855c9754f9-v2hjb\" (UID: \"ee7a029a-15b4-431e-9a2e-31dcbdc111bb\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v2hjb"
	Nov 23 10:17:29 no-preload-541522 kubelet[714]: I1123 10:17:29.922734     714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 10:17:30 no-preload-541522 kubelet[714]: I1123 10:17:30.601752     714 scope.go:117] "RemoveContainer" containerID="bdb96edb0ddf175e50be1b773b90974808da82e135bf0c0bfda51767461da508"
	Nov 23 10:17:31 no-preload-541522 kubelet[714]: I1123 10:17:31.607366     714 scope.go:117] "RemoveContainer" containerID="bdb96edb0ddf175e50be1b773b90974808da82e135bf0c0bfda51767461da508"
	Nov 23 10:17:31 no-preload-541522 kubelet[714]: I1123 10:17:31.607611     714 scope.go:117] "RemoveContainer" containerID="e274afd831e13632e24ee381c6ce1b02bcd4c020bd0f56802cd7b0ccd5fac032"
	Nov 23 10:17:31 no-preload-541522 kubelet[714]: E1123 10:17:31.607816     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-npfkt_kubernetes-dashboard(34f09442-cd95-4537-b45f-aec277f3db4d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt" podUID="34f09442-cd95-4537-b45f-aec277f3db4d"
	Nov 23 10:17:32 no-preload-541522 kubelet[714]: I1123 10:17:32.612924     714 scope.go:117] "RemoveContainer" containerID="e274afd831e13632e24ee381c6ce1b02bcd4c020bd0f56802cd7b0ccd5fac032"
	Nov 23 10:17:32 no-preload-541522 kubelet[714]: E1123 10:17:32.613161     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-npfkt_kubernetes-dashboard(34f09442-cd95-4537-b45f-aec277f3db4d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt" podUID="34f09442-cd95-4537-b45f-aec277f3db4d"
	Nov 23 10:17:37 no-preload-541522 kubelet[714]: I1123 10:17:37.115559     714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v2hjb" podStartSLOduration=2.251279498 podStartE2EDuration="11.115535209s" podCreationTimestamp="2025-11-23 10:17:26 +0000 UTC" firstStartedPulling="2025-11-23 10:17:27.071100986 +0000 UTC m=+6.643234227" lastFinishedPulling="2025-11-23 10:17:35.935356713 +0000 UTC m=+15.507489938" observedRunningTime="2025-11-23 10:17:36.646872105 +0000 UTC m=+16.219005335" watchObservedRunningTime="2025-11-23 10:17:37.115535209 +0000 UTC m=+16.687668438"
	Nov 23 10:17:39 no-preload-541522 kubelet[714]: I1123 10:17:39.211297     714 scope.go:117] "RemoveContainer" containerID="e274afd831e13632e24ee381c6ce1b02bcd4c020bd0f56802cd7b0ccd5fac032"
	Nov 23 10:17:39 no-preload-541522 kubelet[714]: E1123 10:17:39.211534     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-npfkt_kubernetes-dashboard(34f09442-cd95-4537-b45f-aec277f3db4d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt" podUID="34f09442-cd95-4537-b45f-aec277f3db4d"
	Nov 23 10:17:54 no-preload-541522 kubelet[714]: I1123 10:17:54.537599     714 scope.go:117] "RemoveContainer" containerID="e274afd831e13632e24ee381c6ce1b02bcd4c020bd0f56802cd7b0ccd5fac032"
	Nov 23 10:17:54 no-preload-541522 kubelet[714]: I1123 10:17:54.670258     714 scope.go:117] "RemoveContainer" containerID="c1907adeaa6a131d9b2c1bc89e267c99d679be8842bb2f6d15b9fcc745975d47"
	Nov 23 10:17:54 no-preload-541522 kubelet[714]: I1123 10:17:54.672284     714 scope.go:117] "RemoveContainer" containerID="e274afd831e13632e24ee381c6ce1b02bcd4c020bd0f56802cd7b0ccd5fac032"
	Nov 23 10:17:54 no-preload-541522 kubelet[714]: I1123 10:17:54.672500     714 scope.go:117] "RemoveContainer" containerID="1a6b552b31a47947fd2e0f3c471e54b8792bf961ce9509dffe210303b8bcb455"
	Nov 23 10:17:54 no-preload-541522 kubelet[714]: E1123 10:17:54.672694     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-npfkt_kubernetes-dashboard(34f09442-cd95-4537-b45f-aec277f3db4d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt" podUID="34f09442-cd95-4537-b45f-aec277f3db4d"
	Nov 23 10:17:59 no-preload-541522 kubelet[714]: I1123 10:17:59.210544     714 scope.go:117] "RemoveContainer" containerID="1a6b552b31a47947fd2e0f3c471e54b8792bf961ce9509dffe210303b8bcb455"
	Nov 23 10:17:59 no-preload-541522 kubelet[714]: E1123 10:17:59.210708     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-npfkt_kubernetes-dashboard(34f09442-cd95-4537-b45f-aec277f3db4d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt" podUID="34f09442-cd95-4537-b45f-aec277f3db4d"
	Nov 23 10:18:10 no-preload-541522 kubelet[714]: I1123 10:18:10.537332     714 scope.go:117] "RemoveContainer" containerID="1a6b552b31a47947fd2e0f3c471e54b8792bf961ce9509dffe210303b8bcb455"
	Nov 23 10:18:10 no-preload-541522 kubelet[714]: E1123 10:18:10.537576     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-npfkt_kubernetes-dashboard(34f09442-cd95-4537-b45f-aec277f3db4d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt" podUID="34f09442-cd95-4537-b45f-aec277f3db4d"
	Nov 23 10:18:13 no-preload-541522 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:18:13 no-preload-541522 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:18:13 no-preload-541522 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 10:18:13 no-preload-541522 systemd[1]: kubelet.service: Consumed 1.754s CPU time.
	
	
	==> kubernetes-dashboard [8910e96f3502b14ff942cd962a23008447c9446e693d1e751f367dee3fba3ab3] <==
	2025/11/23 10:17:36 Using namespace: kubernetes-dashboard
	2025/11/23 10:17:36 Using in-cluster config to connect to apiserver
	2025/11/23 10:17:36 Using secret token for csrf signing
	2025/11/23 10:17:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 10:17:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 10:17:36 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 10:17:36 Generating JWE encryption key
	2025/11/23 10:17:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 10:17:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 10:17:36 Initializing JWE encryption key from synchronized object
	2025/11/23 10:17:36 Creating in-cluster Sidecar client
	2025/11/23 10:17:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:17:36 Serving insecurely on HTTP port: 9090
	2025/11/23 10:18:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:17:36 Starting overwatch
	
	
	==> storage-provisioner [0b05c287971f7cba3e6f8c1b6ce364a72dac9d8cb9a1b907d3544285a4cdde68] <==
	I1123 10:17:54.750308       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:17:54.758164       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:17:54.758214       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:17:54.760403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:17:58.215296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:02.476474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:06.075319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:09.130138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:12.153341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:12.159349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:18:12.159537       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:18:12.159665       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc2f647a-dbc0-4e88-bc5d-2f4e9ba1110c", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-541522_832b45ab-c17f-47a7-b4d9-ec0fe4a24ca9 became leader
	I1123 10:18:12.159743       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-541522_832b45ab-c17f-47a7-b4d9-ec0fe4a24ca9!
	W1123 10:18:12.161796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:12.165761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:18:12.259933       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-541522_832b45ab-c17f-47a7-b4d9-ec0fe4a24ca9!
	W1123 10:18:14.169016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:14.173544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:16.177535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:16.182193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c1907adeaa6a131d9b2c1bc89e267c99d679be8842bb2f6d15b9fcc745975d47] <==
	I1123 10:17:23.923128       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 10:17:53.928164       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-541522 -n no-preload-541522
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-541522 -n no-preload-541522: exit status 2 (356.211255ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-541522 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-541522
helpers_test.go:243: (dbg) docker inspect no-preload-541522:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6eb78d2b6b76b54751cdbc6803f7c5e6c001120afa09311adefdc9e243248ba",
	        "Created": "2025-11-23T10:15:44.853738209Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 371572,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:17:13.048471684Z",
	            "FinishedAt": "2025-11-23T10:17:11.518342413Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/e6eb78d2b6b76b54751cdbc6803f7c5e6c001120afa09311adefdc9e243248ba/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6eb78d2b6b76b54751cdbc6803f7c5e6c001120afa09311adefdc9e243248ba/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6eb78d2b6b76b54751cdbc6803f7c5e6c001120afa09311adefdc9e243248ba/hosts",
	        "LogPath": "/var/lib/docker/containers/e6eb78d2b6b76b54751cdbc6803f7c5e6c001120afa09311adefdc9e243248ba/e6eb78d2b6b76b54751cdbc6803f7c5e6c001120afa09311adefdc9e243248ba-json.log",
	        "Name": "/no-preload-541522",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-541522:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-541522",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6eb78d2b6b76b54751cdbc6803f7c5e6c001120afa09311adefdc9e243248ba",
	                "LowerDir": "/var/lib/docker/overlay2/23785fec93f41cf14687a94fe439202e1986b9d5ecc74e3696510796f789088e-init/diff:/var/lib/docker/overlay2/fa24abb4c55f78a010c7e2a32f724b8d5e912441e40bb77877899b0e5f3a9c8d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/23785fec93f41cf14687a94fe439202e1986b9d5ecc74e3696510796f789088e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/23785fec93f41cf14687a94fe439202e1986b9d5ecc74e3696510796f789088e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/23785fec93f41cf14687a94fe439202e1986b9d5ecc74e3696510796f789088e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-541522",
	                "Source": "/var/lib/docker/volumes/no-preload-541522/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-541522",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-541522",
	                "name.minikube.sigs.k8s.io": "no-preload-541522",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6708dc35a9badd634f9805cd398ce6d84b075ddf3b84b69ff07b0cf02cd9c12d",
	            "SandboxKey": "/var/run/docker/netns/6708dc35a9ba",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-541522": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0caff4f103e2bb50c273486830a8e865b14f6dbe8e146654adb86f6d80472821",
	                    "EndpointID": "9e6509e611a2c3d344f94605c775fd2ab40aeb8f84ea3dbcb3e369308fcb4c2c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "96:4a:5c:95:23:9f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-541522",
	                        "e6eb78d2b6b7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-541522 -n no-preload-541522
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-541522 -n no-preload-541522: exit status 2 (388.213893ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-541522 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-541522 logs -n 25: (2.355164535s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-791161 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo containerd config dump                                                                                                                                                                                                  │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo crio config                                                                                                                                                                                                             │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ delete  │ -p bridge-791161                                                                                                                                                                                                                              │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ delete  │ -p disable-driver-mounts-268907                                                                                                                                                                                                               │ disable-driver-mounts-268907 │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-541522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p default-k8s-diff-port-772252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-412306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ start   │ -p embed-certs-412306 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ old-k8s-version-990757 image list --format=json                                                                                                                                                                                               │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p old-k8s-version-990757 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-772252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-772252 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ delete  │ -p old-k8s-version-990757                                                                                                                                                                                                                     │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ no-preload-541522 image list --format=json                                                                                                                                                                                                    │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p no-preload-541522 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ delete  │ -p old-k8s-version-990757                                                                                                                                                                                                                     │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ embed-certs-412306 image list --format=json                                                                                                                                                                                                   │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p newest-cni-956615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ pause   │ -p embed-certs-412306 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:18:16
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:18:16.055139  384087 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:18:16.055453  384087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:16.055465  384087 out.go:374] Setting ErrFile to fd 2...
	I1123 10:18:16.055471  384087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:16.055752  384087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:18:16.056433  384087 out.go:368] Setting JSON to false
	I1123 10:18:16.058300  384087 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10837,"bootTime":1763882259,"procs":489,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:18:16.058361  384087 start.go:143] virtualization: kvm guest
	I1123 10:18:16.060255  384087 out.go:179] * [newest-cni-956615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:18:16.062154  384087 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:18:16.062208  384087 notify.go:221] Checking for updates...
	I1123 10:18:16.065653  384087 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:18:16.066941  384087 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:18:16.068519  384087 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 10:18:16.069705  384087 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:18:16.070753  384087 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:18:16.075296  384087 config.go:182] Loaded profile config "default-k8s-diff-port-772252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:16.075441  384087 config.go:182] Loaded profile config "embed-certs-412306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:16.075581  384087 config.go:182] Loaded profile config "no-preload-541522": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:16.075700  384087 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:18:16.103550  384087 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 10:18:16.103724  384087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:18:16.179700  384087 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:84 SystemTime:2025-11-23 10:18:16.167474698 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:18:16.179880  384087 docker.go:319] overlay module found
	I1123 10:18:16.181785  384087 out.go:179] * Using the docker driver based on user configuration
	I1123 10:18:16.182797  384087 start.go:309] selected driver: docker
	I1123 10:18:16.182811  384087 start.go:927] validating driver "docker" against <nil>
	I1123 10:18:16.182821  384087 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:18:16.183397  384087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:18:16.255867  384087 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:83 SystemTime:2025-11-23 10:18:16.241912897 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:18:16.256083  384087 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1123 10:18:16.256153  384087 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1123 10:18:16.256493  384087 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:18:16.258264  384087 out.go:179] * Using Docker driver with root privileges
	I1123 10:18:16.259381  384087 cni.go:84] Creating CNI manager for ""
	I1123 10:18:16.259470  384087 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:18:16.259481  384087 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:18:16.259575  384087 start.go:353] cluster config:
	{Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:18:16.260875  384087 out.go:179] * Starting "newest-cni-956615" primary control-plane node in "newest-cni-956615" cluster
	I1123 10:18:16.262276  384087 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:18:16.263490  384087 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:18:16.265212  384087 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:18:16.265252  384087 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 10:18:16.265262  384087 cache.go:65] Caching tarball of preloaded images
	I1123 10:18:16.265304  384087 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:18:16.265381  384087 preload.go:238] Found /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 10:18:16.265397  384087 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:18:16.265504  384087 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/config.json ...
	I1123 10:18:16.265527  384087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/config.json: {Name:mkb811d74a6c8dfdcb785bec927cfa094dfd91e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:16.288941  384087 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:18:16.288968  384087 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:18:16.289001  384087 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:18:16.289049  384087 start.go:360] acquireMachinesLock for newest-cni-956615: {Name:mk5c1d30234ac54be25b363f4d474b6dfbb1cb30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:18:16.289196  384087 start.go:364] duration metric: took 122.072µs to acquireMachinesLock for "newest-cni-956615"
	I1123 10:18:16.289230  384087 start.go:93] Provisioning new machine with config: &{Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:18:16.289350  384087 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Nov 23 10:17:35 no-preload-541522 crio[565]: time="2025-11-23T10:17:35.981320388Z" level=info msg="Created container 8910e96f3502b14ff942cd962a23008447c9446e693d1e751f367dee3fba3ab3: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v2hjb/kubernetes-dashboard" id=af48d590-131d-4165-95fc-3bd6452e2886 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:35 no-preload-541522 crio[565]: time="2025-11-23T10:17:35.982274108Z" level=info msg="Starting container: 8910e96f3502b14ff942cd962a23008447c9446e693d1e751f367dee3fba3ab3" id=307dff47-d191-4e8f-8390-655b8ce7e6e7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:17:35 no-preload-541522 crio[565]: time="2025-11-23T10:17:35.9849562Z" level=info msg="Started container" PID=1711 containerID=8910e96f3502b14ff942cd962a23008447c9446e693d1e751f367dee3fba3ab3 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v2hjb/kubernetes-dashboard id=307dff47-d191-4e8f-8390-655b8ce7e6e7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fec4a88e2a77f65fb7ba4b8a818b21ed7d309b67faaff37e62f790bc56537851
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.538134481Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d5d63e48-b4be-422a-971d-601990533d2b name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.539163701Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b2b54e37-fbdc-4da8-bc5d-441db54eef5d name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.540292463Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt/dashboard-metrics-scraper" id=411eb49d-00f5-4e0f-96c5-85843bdc593b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.540421308Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.546838634Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.547363625Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.573764474Z" level=info msg="Created container 1a6b552b31a47947fd2e0f3c471e54b8792bf961ce9509dffe210303b8bcb455: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt/dashboard-metrics-scraper" id=411eb49d-00f5-4e0f-96c5-85843bdc593b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.574347141Z" level=info msg="Starting container: 1a6b552b31a47947fd2e0f3c471e54b8792bf961ce9509dffe210303b8bcb455" id=cce867d5-eeac-4450-9247-4cba4a2514e8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.575983398Z" level=info msg="Started container" PID=1733 containerID=1a6b552b31a47947fd2e0f3c471e54b8792bf961ce9509dffe210303b8bcb455 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt/dashboard-metrics-scraper id=cce867d5-eeac-4450-9247-4cba4a2514e8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8c47cb36640b442c3feeee4b1693bdb7525eda4bf359f639a8778e89578c2d71
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.670662202Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=faff32c1-dc6e-4b9d-a709-d552a340d565 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.671669556Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=43055858-1cce-4048-94fe-0814900312a3 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.672784153Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7a89c969-2b5e-4c4d-b65e-276ca770971f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.673071371Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.673636927Z" level=info msg="Removing container: e274afd831e13632e24ee381c6ce1b02bcd4c020bd0f56802cd7b0ccd5fac032" id=ce9dfccb-6ad0-48ee-9977-8185dc0595dc name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.681047489Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.681268023Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/31565a2c52885fc5a60ecb039ec9e07f9ac10efdc162d599e724e22ba99ce0f4/merged/etc/passwd: no such file or directory"
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.681296379Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/31565a2c52885fc5a60ecb039ec9e07f9ac10efdc162d599e724e22ba99ce0f4/merged/etc/group: no such file or directory"
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.681577059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.686063106Z" level=info msg="Removed container e274afd831e13632e24ee381c6ce1b02bcd4c020bd0f56802cd7b0ccd5fac032: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt/dashboard-metrics-scraper" id=ce9dfccb-6ad0-48ee-9977-8185dc0595dc name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.73435402Z" level=info msg="Created container 0b05c287971f7cba3e6f8c1b6ce364a72dac9d8cb9a1b907d3544285a4cdde68: kube-system/storage-provisioner/storage-provisioner" id=7a89c969-2b5e-4c4d-b65e-276ca770971f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.735029514Z" level=info msg="Starting container: 0b05c287971f7cba3e6f8c1b6ce364a72dac9d8cb9a1b907d3544285a4cdde68" id=0ecffe64-9721-4732-95e9-ed913fbe06a1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:17:54 no-preload-541522 crio[565]: time="2025-11-23T10:17:54.73695129Z" level=info msg="Started container" PID=1743 containerID=0b05c287971f7cba3e6f8c1b6ce364a72dac9d8cb9a1b907d3544285a4cdde68 description=kube-system/storage-provisioner/storage-provisioner id=0ecffe64-9721-4732-95e9-ed913fbe06a1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=98324ce2ee3a9b24c3a44bfe8291d8f044a9c564d156d188fb450d1f942ebea1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0b05c287971f7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   98324ce2ee3a9       storage-provisioner                          kube-system
	1a6b552b31a47       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   8c47cb36640b4       dashboard-metrics-scraper-6ffb444bf9-npfkt   kubernetes-dashboard
	8910e96f3502b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   fec4a88e2a77f       kubernetes-dashboard-855c9754f9-v2hjb        kubernetes-dashboard
	22076e3d0001c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   442e1d35654ab       coredns-66bc5c9577-krmwt                     kube-system
	eccd95e49c6a3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   66c1cbcaaadf2       busybox                                      default
	c1907adeaa6a1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   98324ce2ee3a9       storage-provisioner                          kube-system
	0b033ba843a9c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   121d6285d3bd3       kube-proxy-sllct                             kube-system
	552c1cc61f9b4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   0730b7f05f25d       kindnet-9vppw                                kube-system
	3638abd54c634       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           58 seconds ago      Running             kube-apiserver              0                   3cde3e514ab21       kube-apiserver-no-preload-541522             kube-system
	3806d3b11c0c4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           58 seconds ago      Running             kube-scheduler              0                   87f356ab41d7e       kube-scheduler-no-preload-541522             kube-system
	454d88050f140       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           58 seconds ago      Running             etcd                        0                   36fffe2952036       etcd-no-preload-541522                       kube-system
	a08adaf22d6a2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           58 seconds ago      Running             kube-controller-manager     0                   ea9620d7ae81a       kube-controller-manager-no-preload-541522    kube-system
	
	
	==> coredns [22076e3d0001c319bbb5b8eb5af9a218edede50a27ff2fca46a99b91e20e37c1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33696 - 15771 "HINFO IN 8244955863368741662.4325592613209093872. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032073638s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-541522
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-541522
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=no-preload-541522
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_16_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:16:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-541522
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:18:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:18:03 +0000   Sun, 23 Nov 2025 10:16:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:18:03 +0000   Sun, 23 Nov 2025 10:16:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:18:03 +0000   Sun, 23 Nov 2025 10:16:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:18:03 +0000   Sun, 23 Nov 2025 10:16:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-541522
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                9eef6a41-5317-48ee-8389-6d173ebb4813
	  Boot ID:                    37682299-5e60-467e-85b2-43c912a4056e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-krmwt                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     114s
	  kube-system                 etcd-no-preload-541522                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m
	  kube-system                 kindnet-9vppw                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-no-preload-541522              250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-no-preload-541522     200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-sllct                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-no-preload-541522              100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-npfkt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-v2hjb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 113s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m                 kubelet          Node no-preload-541522 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                 kubelet          Node no-preload-541522 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                 kubelet          Node no-preload-541522 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           116s               node-controller  Node no-preload-541522 event: Registered Node no-preload-541522 in Controller
	  Normal  NodeReady                101s               kubelet          Node no-preload-541522 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node no-preload-541522 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node no-preload-541522 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node no-preload-541522 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node no-preload-541522 event: Registered Node no-preload-541522 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[ +16.383752] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 09:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 10:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[Nov23 10:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 16 63 a6 3b 7c 08 06
	[  +0.000421] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e f8 56 88 48 d7 08 06
	[  +0.082350] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff be 6d 17 98 af e9 08 06
	[  +0.000334] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[ +24.687881] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 3c b3 56 e6 32 08 06
	[  +0.000364] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da b2 25 9e f0 5d 08 06
	[Nov23 10:16] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	[ +42.472302] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 bc be 6d 36 b3 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	
	
	==> etcd [454d88050f14061405415d3f827ed9bd0308c85f15a90182f9e2c8138c52f80e] <==
	{"level":"warn","ts":"2025-11-23T10:17:22.099417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.106639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.113183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.119623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.127196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.134372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.141167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.150263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.157803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.165731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.172625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.179276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.186915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.195498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.203310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.211390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.217808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.225250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.243369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.251190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.258382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.265668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:22.307442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:33.518674Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"180.651422ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-krmwt\" limit:1 ","response":"range_response_count:1 size:5933"}
	{"level":"info","ts":"2025-11-23T10:17:33.518839Z","caller":"traceutil/trace.go:172","msg":"trace[1046720401] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-krmwt; range_end:; response_count:1; response_revision:580; }","duration":"180.839997ms","start":"2025-11-23T10:17:33.337978Z","end":"2025-11-23T10:17:33.518818Z","steps":["trace[1046720401] 'range keys from in-memory index tree'  (duration: 180.488221ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:18:19 up  3:00,  0 user,  load average: 4.49, 4.97, 2.99
	Linux no-preload-541522 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [552c1cc61f9b4b2ebc92a448d957f042ab6a8903da1181a5136796e2f5ed4c24] <==
	I1123 10:17:24.108655       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:17:24.108917       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 10:17:24.109150       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:17:24.109171       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:17:24.109193       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:17:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:17:24.313329       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:17:24.313475       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:17:24.313491       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:17:24.313838       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:17:24.698334       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:17:24.698375       1 metrics.go:72] Registering metrics
	I1123 10:17:24.698471       1 controller.go:711] "Syncing nftables rules"
	I1123 10:17:34.313181       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:17:34.313229       1 main.go:301] handling current node
	I1123 10:17:44.313457       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:17:44.313504       1 main.go:301] handling current node
	I1123 10:17:54.314151       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:17:54.314185       1 main.go:301] handling current node
	I1123 10:18:04.313756       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:18:04.313811       1 main.go:301] handling current node
	I1123 10:18:14.316040       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:18:14.316073       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3638abd54c634ee34a952430b3c8ad3b8c78fb2c6abb24bdbdb0382ea4147574] <==
	I1123 10:17:22.790222       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 10:17:22.790590       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 10:17:22.790682       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 10:17:22.791491       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 10:17:22.791698       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 10:17:22.793993       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 10:17:22.794068       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 10:17:22.795045       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 10:17:22.811640       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 10:17:22.816528       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 10:17:22.826345       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 10:17:22.826388       1 policy_source.go:240] refreshing policies
	I1123 10:17:22.828953       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:17:22.843811       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:17:23.076717       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:17:23.109280       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:17:23.130132       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:17:23.138146       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:17:23.144993       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:17:23.181489       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.167.184"}
	I1123 10:17:23.192884       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.144.156"}
	I1123 10:17:23.690493       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:17:26.544823       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:17:26.592472       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:17:26.644240       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a08adaf22d6a20e8d1bde7d9ffe78523a672a25236e3b7bd280fe7482c65da6c] <==
	I1123 10:17:26.100735       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 10:17:26.115552       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:17:26.138113       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 10:17:26.138158       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 10:17:26.138234       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 10:17:26.138236       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 10:17:26.138483       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 10:17:26.138591       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 10:17:26.138502       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 10:17:26.138266       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 10:17:26.138251       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 10:17:26.144392       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 10:17:26.145928       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:17:26.145994       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 10:17:26.148577       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:17:26.150252       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 10:17:26.152534       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 10:17:26.164124       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 10:17:26.166471       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 10:17:26.167850       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:17:26.172980       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 10:17:26.173122       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 10:17:26.173267       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-541522"
	I1123 10:17:26.173326       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 10:17:26.177788       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	
	
	==> kube-proxy [0b033ba843a9c8de8730dd081e3ca3cd3e9327b7d05531c1a7d30ecee4a00edb] <==
	I1123 10:17:23.958838       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:17:24.013757       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:17:24.114654       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:17:24.114689       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 10:17:24.114792       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:17:24.132580       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:17:24.132642       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:17:24.137517       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:17:24.137895       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:17:24.137977       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:17:24.139569       1 config.go:309] "Starting node config controller"
	I1123 10:17:24.139635       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:17:24.139655       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:17:24.139611       1 config.go:200] "Starting service config controller"
	I1123 10:17:24.139677       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:17:24.139637       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:17:24.139710       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:17:24.139566       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:17:24.139718       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:17:24.239853       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 10:17:24.239853       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:17:24.239893       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [3806d3b11c0c4af0a295b79daeec9cddc1ca76da75190a71f7234b95f181f202] <==
	I1123 10:17:21.298917       1 serving.go:386] Generated self-signed cert in-memory
	W1123 10:17:22.732737       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 10:17:22.732856       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 10:17:22.732891       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 10:17:22.732936       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 10:17:22.783200       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 10:17:22.783599       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:17:22.787131       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:17:22.787222       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:17:22.788502       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:17:22.788589       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 10:17:22.887820       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:17:26 no-preload-541522 kubelet[714]: I1123 10:17:26.755537     714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hscw\" (UniqueName: \"kubernetes.io/projected/34f09442-cd95-4537-b45f-aec277f3db4d-kube-api-access-8hscw\") pod \"dashboard-metrics-scraper-6ffb444bf9-npfkt\" (UID: \"34f09442-cd95-4537-b45f-aec277f3db4d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt"
	Nov 23 10:17:26 no-preload-541522 kubelet[714]: I1123 10:17:26.755584     714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svs7g\" (UniqueName: \"kubernetes.io/projected/ee7a029a-15b4-431e-9a2e-31dcbdc111bb-kube-api-access-svs7g\") pod \"kubernetes-dashboard-855c9754f9-v2hjb\" (UID: \"ee7a029a-15b4-431e-9a2e-31dcbdc111bb\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v2hjb"
	Nov 23 10:17:29 no-preload-541522 kubelet[714]: I1123 10:17:29.922734     714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 10:17:30 no-preload-541522 kubelet[714]: I1123 10:17:30.601752     714 scope.go:117] "RemoveContainer" containerID="bdb96edb0ddf175e50be1b773b90974808da82e135bf0c0bfda51767461da508"
	Nov 23 10:17:31 no-preload-541522 kubelet[714]: I1123 10:17:31.607366     714 scope.go:117] "RemoveContainer" containerID="bdb96edb0ddf175e50be1b773b90974808da82e135bf0c0bfda51767461da508"
	Nov 23 10:17:31 no-preload-541522 kubelet[714]: I1123 10:17:31.607611     714 scope.go:117] "RemoveContainer" containerID="e274afd831e13632e24ee381c6ce1b02bcd4c020bd0f56802cd7b0ccd5fac032"
	Nov 23 10:17:31 no-preload-541522 kubelet[714]: E1123 10:17:31.607816     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-npfkt_kubernetes-dashboard(34f09442-cd95-4537-b45f-aec277f3db4d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt" podUID="34f09442-cd95-4537-b45f-aec277f3db4d"
	Nov 23 10:17:32 no-preload-541522 kubelet[714]: I1123 10:17:32.612924     714 scope.go:117] "RemoveContainer" containerID="e274afd831e13632e24ee381c6ce1b02bcd4c020bd0f56802cd7b0ccd5fac032"
	Nov 23 10:17:32 no-preload-541522 kubelet[714]: E1123 10:17:32.613161     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-npfkt_kubernetes-dashboard(34f09442-cd95-4537-b45f-aec277f3db4d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt" podUID="34f09442-cd95-4537-b45f-aec277f3db4d"
	Nov 23 10:17:37 no-preload-541522 kubelet[714]: I1123 10:17:37.115559     714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v2hjb" podStartSLOduration=2.251279498 podStartE2EDuration="11.115535209s" podCreationTimestamp="2025-11-23 10:17:26 +0000 UTC" firstStartedPulling="2025-11-23 10:17:27.071100986 +0000 UTC m=+6.643234227" lastFinishedPulling="2025-11-23 10:17:35.935356713 +0000 UTC m=+15.507489938" observedRunningTime="2025-11-23 10:17:36.646872105 +0000 UTC m=+16.219005335" watchObservedRunningTime="2025-11-23 10:17:37.115535209 +0000 UTC m=+16.687668438"
	Nov 23 10:17:39 no-preload-541522 kubelet[714]: I1123 10:17:39.211297     714 scope.go:117] "RemoveContainer" containerID="e274afd831e13632e24ee381c6ce1b02bcd4c020bd0f56802cd7b0ccd5fac032"
	Nov 23 10:17:39 no-preload-541522 kubelet[714]: E1123 10:17:39.211534     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-npfkt_kubernetes-dashboard(34f09442-cd95-4537-b45f-aec277f3db4d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt" podUID="34f09442-cd95-4537-b45f-aec277f3db4d"
	Nov 23 10:17:54 no-preload-541522 kubelet[714]: I1123 10:17:54.537599     714 scope.go:117] "RemoveContainer" containerID="e274afd831e13632e24ee381c6ce1b02bcd4c020bd0f56802cd7b0ccd5fac032"
	Nov 23 10:17:54 no-preload-541522 kubelet[714]: I1123 10:17:54.670258     714 scope.go:117] "RemoveContainer" containerID="c1907adeaa6a131d9b2c1bc89e267c99d679be8842bb2f6d15b9fcc745975d47"
	Nov 23 10:17:54 no-preload-541522 kubelet[714]: I1123 10:17:54.672284     714 scope.go:117] "RemoveContainer" containerID="e274afd831e13632e24ee381c6ce1b02bcd4c020bd0f56802cd7b0ccd5fac032"
	Nov 23 10:17:54 no-preload-541522 kubelet[714]: I1123 10:17:54.672500     714 scope.go:117] "RemoveContainer" containerID="1a6b552b31a47947fd2e0f3c471e54b8792bf961ce9509dffe210303b8bcb455"
	Nov 23 10:17:54 no-preload-541522 kubelet[714]: E1123 10:17:54.672694     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-npfkt_kubernetes-dashboard(34f09442-cd95-4537-b45f-aec277f3db4d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt" podUID="34f09442-cd95-4537-b45f-aec277f3db4d"
	Nov 23 10:17:59 no-preload-541522 kubelet[714]: I1123 10:17:59.210544     714 scope.go:117] "RemoveContainer" containerID="1a6b552b31a47947fd2e0f3c471e54b8792bf961ce9509dffe210303b8bcb455"
	Nov 23 10:17:59 no-preload-541522 kubelet[714]: E1123 10:17:59.210708     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-npfkt_kubernetes-dashboard(34f09442-cd95-4537-b45f-aec277f3db4d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt" podUID="34f09442-cd95-4537-b45f-aec277f3db4d"
	Nov 23 10:18:10 no-preload-541522 kubelet[714]: I1123 10:18:10.537332     714 scope.go:117] "RemoveContainer" containerID="1a6b552b31a47947fd2e0f3c471e54b8792bf961ce9509dffe210303b8bcb455"
	Nov 23 10:18:10 no-preload-541522 kubelet[714]: E1123 10:18:10.537576     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-npfkt_kubernetes-dashboard(34f09442-cd95-4537-b45f-aec277f3db4d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-npfkt" podUID="34f09442-cd95-4537-b45f-aec277f3db4d"
	Nov 23 10:18:13 no-preload-541522 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:18:13 no-preload-541522 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:18:13 no-preload-541522 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 10:18:13 no-preload-541522 systemd[1]: kubelet.service: Consumed 1.754s CPU time.
	
	
	==> kubernetes-dashboard [8910e96f3502b14ff942cd962a23008447c9446e693d1e751f367dee3fba3ab3] <==
	2025/11/23 10:17:36 Using namespace: kubernetes-dashboard
	2025/11/23 10:17:36 Using in-cluster config to connect to apiserver
	2025/11/23 10:17:36 Using secret token for csrf signing
	2025/11/23 10:17:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 10:17:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 10:17:36 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 10:17:36 Generating JWE encryption key
	2025/11/23 10:17:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 10:17:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 10:17:36 Initializing JWE encryption key from synchronized object
	2025/11/23 10:17:36 Creating in-cluster Sidecar client
	2025/11/23 10:17:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:17:36 Serving insecurely on HTTP port: 9090
	2025/11/23 10:18:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:17:36 Starting overwatch
	
	
	==> storage-provisioner [0b05c287971f7cba3e6f8c1b6ce364a72dac9d8cb9a1b907d3544285a4cdde68] <==
	I1123 10:17:54.750308       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:17:54.758164       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:17:54.758214       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:17:54.760403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:17:58.215296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:02.476474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:06.075319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:09.130138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:12.153341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:12.159349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:18:12.159537       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:18:12.159665       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc2f647a-dbc0-4e88-bc5d-2f4e9ba1110c", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-541522_832b45ab-c17f-47a7-b4d9-ec0fe4a24ca9 became leader
	I1123 10:18:12.159743       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-541522_832b45ab-c17f-47a7-b4d9-ec0fe4a24ca9!
	W1123 10:18:12.161796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:12.165761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:18:12.259933       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-541522_832b45ab-c17f-47a7-b4d9-ec0fe4a24ca9!
	W1123 10:18:14.169016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:14.173544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:16.177535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:16.182193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:18.187269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:18.195434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c1907adeaa6a131d9b2c1bc89e267c99d679be8842bb2f6d15b9fcc745975d47] <==
	I1123 10:17:23.923128       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 10:17:53.928164       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-541522 -n no-preload-541522
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-541522 -n no-preload-541522: exit status 2 (382.61581ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-541522 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.59s)
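The kubelet journal in the logs above shows systemd stopping kubelet.service at 10:18:13, which lines up with this pause attempt, while the status check still reports the apiserver as Running. A short diagnostic sketch (not part of the harness; it only reuses the minikube CLI already exercised in this run, with the profile name from this report) to confirm what state the failed pause left the node in:

	minikube status -p no-preload-541522                                   # host/apiserver state as minikube sees it
	minikube ssh -p no-preload-541522 "sudo systemctl is-active kubelet"   # whether the pause path left kubelet stopped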

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (8.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-412306 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-412306 --alsologtostderr -v=1: exit status 80 (2.620993447s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-412306 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:18:16.177941  384138 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:18:16.178270  384138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:16.178282  384138 out.go:374] Setting ErrFile to fd 2...
	I1123 10:18:16.178288  384138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:16.178595  384138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:18:16.178908  384138 out.go:368] Setting JSON to false
	I1123 10:18:16.178936  384138 mustload.go:66] Loading cluster: embed-certs-412306
	I1123 10:18:16.179408  384138 config.go:182] Loaded profile config "embed-certs-412306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:16.179983  384138 cli_runner.go:164] Run: docker container inspect embed-certs-412306 --format={{.State.Status}}
	I1123 10:18:16.201022  384138 host.go:66] Checking if "embed-certs-412306" exists ...
	I1123 10:18:16.201810  384138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:18:16.277813  384138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:83 SystemTime:2025-11-23 10:18:16.267419402 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:18:16.278505  384138 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-412306 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 10:18:16.280419  384138 out.go:179] * Pausing node embed-certs-412306 ... 
	I1123 10:18:16.281679  384138 host.go:66] Checking if "embed-certs-412306" exists ...
	I1123 10:18:16.282018  384138 ssh_runner.go:195] Run: systemctl --version
	I1123 10:18:16.282109  384138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412306
	I1123 10:18:16.303385  384138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/embed-certs-412306/id_rsa Username:docker}
	I1123 10:18:16.409040  384138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:18:16.424517  384138 pause.go:52] kubelet running: true
	I1123 10:18:16.424592  384138 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:18:16.626820  384138 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:18:16.626927  384138 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:18:16.707778  384138 cri.go:89] found id: "704bba87333e873742681d9c76cf92f3fe506464ae3f386988d14477495c41ff"
	I1123 10:18:16.707806  384138 cri.go:89] found id: "75724cb907ea93e8f5e1f738cd27ef6c1c393779cd23520ffec658e64d9a901b"
	I1123 10:18:16.707814  384138 cri.go:89] found id: "c50f133d26d04101f2479db4f241a3a6ef37b6beb8a70dd8044463313b1b1ba7"
	I1123 10:18:16.707818  384138 cri.go:89] found id: "c521abd71803403723cd9adfee52f1ca392c31bc569759181fa969d175d352d0"
	I1123 10:18:16.707823  384138 cri.go:89] found id: "a85287776ccce12df9499782bd76fd12f6a905bc4752aa767522a684fb205ca7"
	I1123 10:18:16.707829  384138 cri.go:89] found id: "0632950c74da2eb4978b2f96c82351b0c7fc311f03cdaaff9f60fb24bdaa3804"
	I1123 10:18:16.707835  384138 cri.go:89] found id: "b7c384560289e99b732f0e7897327765130672b6e7346a6340bd2a1e35372ea5"
	I1123 10:18:16.707840  384138 cri.go:89] found id: "3ce42ea391320b5ee86e145a2f64c2015bb9f8236b5dfa38af9a25f2cb484824"
	I1123 10:18:16.707845  384138 cri.go:89] found id: "e3ffbd81d631a2d4ada1879aabcbc74e4a0a1df338a0ca8e07cf4c3ff88f9430"
	I1123 10:18:16.707854  384138 cri.go:89] found id: "142d4e2b0120e34731be21c77b8c41aff72dea2ade2760d95ab388bd80fef96f"
	I1123 10:18:16.707862  384138 cri.go:89] found id: "a13d6171b7f17830237a3cf2ae96d3362f30e2cedebf638fd57cb088e78597c5"
	I1123 10:18:16.707867  384138 cri.go:89] found id: ""
	I1123 10:18:16.707910  384138 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:18:16.721164  384138 retry.go:31] will retry after 163.43574ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:18:16Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:18:16.885458  384138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:18:16.900537  384138 pause.go:52] kubelet running: false
	I1123 10:18:16.900596  384138 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:18:17.084959  384138 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:18:17.085052  384138 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:18:17.167118  384138 cri.go:89] found id: "704bba87333e873742681d9c76cf92f3fe506464ae3f386988d14477495c41ff"
	I1123 10:18:17.167145  384138 cri.go:89] found id: "75724cb907ea93e8f5e1f738cd27ef6c1c393779cd23520ffec658e64d9a901b"
	I1123 10:18:17.167151  384138 cri.go:89] found id: "c50f133d26d04101f2479db4f241a3a6ef37b6beb8a70dd8044463313b1b1ba7"
	I1123 10:18:17.167156  384138 cri.go:89] found id: "c521abd71803403723cd9adfee52f1ca392c31bc569759181fa969d175d352d0"
	I1123 10:18:17.167160  384138 cri.go:89] found id: "a85287776ccce12df9499782bd76fd12f6a905bc4752aa767522a684fb205ca7"
	I1123 10:18:17.167165  384138 cri.go:89] found id: "0632950c74da2eb4978b2f96c82351b0c7fc311f03cdaaff9f60fb24bdaa3804"
	I1123 10:18:17.167169  384138 cri.go:89] found id: "b7c384560289e99b732f0e7897327765130672b6e7346a6340bd2a1e35372ea5"
	I1123 10:18:17.167174  384138 cri.go:89] found id: "3ce42ea391320b5ee86e145a2f64c2015bb9f8236b5dfa38af9a25f2cb484824"
	I1123 10:18:17.167177  384138 cri.go:89] found id: "e3ffbd81d631a2d4ada1879aabcbc74e4a0a1df338a0ca8e07cf4c3ff88f9430"
	I1123 10:18:17.167188  384138 cri.go:89] found id: "142d4e2b0120e34731be21c77b8c41aff72dea2ade2760d95ab388bd80fef96f"
	I1123 10:18:17.167193  384138 cri.go:89] found id: "a13d6171b7f17830237a3cf2ae96d3362f30e2cedebf638fd57cb088e78597c5"
	I1123 10:18:17.167198  384138 cri.go:89] found id: ""
	I1123 10:18:17.167274  384138 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:18:17.181295  384138 retry.go:31] will retry after 557.380127ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:18:17Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:18:17.739069  384138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:18:17.758195  384138 pause.go:52] kubelet running: false
	I1123 10:18:17.758267  384138 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:18:17.935132  384138 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:18:17.935225  384138 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:18:18.015099  384138 cri.go:89] found id: "704bba87333e873742681d9c76cf92f3fe506464ae3f386988d14477495c41ff"
	I1123 10:18:18.015218  384138 cri.go:89] found id: "75724cb907ea93e8f5e1f738cd27ef6c1c393779cd23520ffec658e64d9a901b"
	I1123 10:18:18.015224  384138 cri.go:89] found id: "c50f133d26d04101f2479db4f241a3a6ef37b6beb8a70dd8044463313b1b1ba7"
	I1123 10:18:18.015240  384138 cri.go:89] found id: "c521abd71803403723cd9adfee52f1ca392c31bc569759181fa969d175d352d0"
	I1123 10:18:18.015245  384138 cri.go:89] found id: "a85287776ccce12df9499782bd76fd12f6a905bc4752aa767522a684fb205ca7"
	I1123 10:18:18.015250  384138 cri.go:89] found id: "0632950c74da2eb4978b2f96c82351b0c7fc311f03cdaaff9f60fb24bdaa3804"
	I1123 10:18:18.015254  384138 cri.go:89] found id: "b7c384560289e99b732f0e7897327765130672b6e7346a6340bd2a1e35372ea5"
	I1123 10:18:18.015258  384138 cri.go:89] found id: "3ce42ea391320b5ee86e145a2f64c2015bb9f8236b5dfa38af9a25f2cb484824"
	I1123 10:18:18.015262  384138 cri.go:89] found id: "e3ffbd81d631a2d4ada1879aabcbc74e4a0a1df338a0ca8e07cf4c3ff88f9430"
	I1123 10:18:18.015271  384138 cri.go:89] found id: "142d4e2b0120e34731be21c77b8c41aff72dea2ade2760d95ab388bd80fef96f"
	I1123 10:18:18.015275  384138 cri.go:89] found id: "a13d6171b7f17830237a3cf2ae96d3362f30e2cedebf638fd57cb088e78597c5"
	I1123 10:18:18.015278  384138 cri.go:89] found id: ""
	I1123 10:18:18.015343  384138 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:18:18.030070  384138 retry.go:31] will retry after 370.588922ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:18:18Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:18:18.401723  384138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:18:18.416152  384138 pause.go:52] kubelet running: false
	I1123 10:18:18.416220  384138 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:18:18.604287  384138 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:18:18.604385  384138 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:18:18.680201  384138 cri.go:89] found id: "704bba87333e873742681d9c76cf92f3fe506464ae3f386988d14477495c41ff"
	I1123 10:18:18.680230  384138 cri.go:89] found id: "75724cb907ea93e8f5e1f738cd27ef6c1c393779cd23520ffec658e64d9a901b"
	I1123 10:18:18.680236  384138 cri.go:89] found id: "c50f133d26d04101f2479db4f241a3a6ef37b6beb8a70dd8044463313b1b1ba7"
	I1123 10:18:18.680242  384138 cri.go:89] found id: "c521abd71803403723cd9adfee52f1ca392c31bc569759181fa969d175d352d0"
	I1123 10:18:18.680246  384138 cri.go:89] found id: "a85287776ccce12df9499782bd76fd12f6a905bc4752aa767522a684fb205ca7"
	I1123 10:18:18.680251  384138 cri.go:89] found id: "0632950c74da2eb4978b2f96c82351b0c7fc311f03cdaaff9f60fb24bdaa3804"
	I1123 10:18:18.680256  384138 cri.go:89] found id: "b7c384560289e99b732f0e7897327765130672b6e7346a6340bd2a1e35372ea5"
	I1123 10:18:18.680260  384138 cri.go:89] found id: "3ce42ea391320b5ee86e145a2f64c2015bb9f8236b5dfa38af9a25f2cb484824"
	I1123 10:18:18.680264  384138 cri.go:89] found id: "e3ffbd81d631a2d4ada1879aabcbc74e4a0a1df338a0ca8e07cf4c3ff88f9430"
	I1123 10:18:18.680272  384138 cri.go:89] found id: "142d4e2b0120e34731be21c77b8c41aff72dea2ade2760d95ab388bd80fef96f"
	I1123 10:18:18.680275  384138 cri.go:89] found id: "a13d6171b7f17830237a3cf2ae96d3362f30e2cedebf638fd57cb088e78597c5"
	I1123 10:18:18.680278  384138 cri.go:89] found id: ""
	I1123 10:18:18.680328  384138 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:18:18.698424  384138 out.go:203] 
	W1123 10:18:18.700727  384138 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:18:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:18:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:18:18.700749  384138 out.go:285] * 
	* 
	W1123 10:18:18.706395  384138 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:18:18.707707  384138 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-412306 --alsologtostderr -v=1 failed: exit status 80
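The exit status 80 here is the GUEST_PAUSE error in the stderr block above: the pause path repeatedly runs "sudo runc list -f json" on the node and gives up when every attempt fails with "open /run/runc: no such file or directory", even though crictl still enumerates the kube-system and kubernetes-dashboard containers. A minimal manual sketch of the same probe, assuming the embed-certs-412306 profile from this run is still up and reusing only commands that already appear in the log:

	minikube ssh -p embed-certs-412306 "sudo runc list -f json"    # the probe that failed in this run
	minikube ssh -p embed-certs-412306 "sudo ls -ld /run/runc"     # check whether the runc state directory exists at all
	minikube ssh -p embed-certs-412306 "sudo crictl ps --quiet --label io.kubernetes.pod.namespace=kube-system"

If the runc listing succeeds by hand, the failure was transient; if /run/runc is genuinely absent on this CRI-O node, every retry hits the same error and pause keeps exiting with GUEST_PAUSE as shown above.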
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-412306
helpers_test.go:243: (dbg) docker inspect embed-certs-412306:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2363fe4602f510ad2579d8b3b443f201366bc865187d0f0f21ea72677edf75dd",
	        "Created": "2025-11-23T10:16:14.870430409Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 374065,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:17:19.862346196Z",
	            "FinishedAt": "2025-11-23T10:17:18.911006568Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/2363fe4602f510ad2579d8b3b443f201366bc865187d0f0f21ea72677edf75dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2363fe4602f510ad2579d8b3b443f201366bc865187d0f0f21ea72677edf75dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/2363fe4602f510ad2579d8b3b443f201366bc865187d0f0f21ea72677edf75dd/hosts",
	        "LogPath": "/var/lib/docker/containers/2363fe4602f510ad2579d8b3b443f201366bc865187d0f0f21ea72677edf75dd/2363fe4602f510ad2579d8b3b443f201366bc865187d0f0f21ea72677edf75dd-json.log",
	        "Name": "/embed-certs-412306",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-412306:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-412306",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2363fe4602f510ad2579d8b3b443f201366bc865187d0f0f21ea72677edf75dd",
	                "LowerDir": "/var/lib/docker/overlay2/48da241729f2aaaab120e58658600759e52c4c030fbd00be0d48925dc10c5b62-init/diff:/var/lib/docker/overlay2/fa24abb4c55f78a010c7e2a32f724b8d5e912441e40bb77877899b0e5f3a9c8d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/48da241729f2aaaab120e58658600759e52c4c030fbd00be0d48925dc10c5b62/merged",
	                "UpperDir": "/var/lib/docker/overlay2/48da241729f2aaaab120e58658600759e52c4c030fbd00be0d48925dc10c5b62/diff",
	                "WorkDir": "/var/lib/docker/overlay2/48da241729f2aaaab120e58658600759e52c4c030fbd00be0d48925dc10c5b62/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-412306",
	                "Source": "/var/lib/docker/volumes/embed-certs-412306/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-412306",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-412306",
	                "name.minikube.sigs.k8s.io": "embed-certs-412306",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "51762594728b68704c84311aceb4f8acd182d074e7273baaad2816a0181ab11d",
	            "SandboxKey": "/var/run/docker/netns/51762594728b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-412306": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "80c19d1f62c6174f298a861aa9911c5900bfe0857882aac57b7c600a7d06c5aa",
	                    "EndpointID": "967af06264138e34df8b82735ea8cf22985c7cad0683abda3e2c73c355bd28c1",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "02:cc:15:e9:ba:6b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-412306",
	                        "2363fe4602f5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
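The inspect dump above is long; for the pause test the relevant fields are the container state flags. A short sketch using the same docker CLI and Go-template format the harness already uses, to pull just those fields from the node container:

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' embed-certs-412306
	# per the JSON above this would print: status=running paused=false

That "Paused": false is consistent with the stderr earlier: the pause command appears to have failed at the runc listing step, before any containers were actually paused.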
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-412306 -n embed-certs-412306
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-412306 -n embed-certs-412306: exit status 2 (374.848698ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-412306 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-412306 logs -n 25: (2.869328048s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-791161 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo containerd config dump                                                                                                                                                                                                  │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo crio config                                                                                                                                                                                                             │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ delete  │ -p bridge-791161                                                                                                                                                                                                                              │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ delete  │ -p disable-driver-mounts-268907                                                                                                                                                                                                               │ disable-driver-mounts-268907 │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-541522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p default-k8s-diff-port-772252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-412306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ start   │ -p embed-certs-412306 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ old-k8s-version-990757 image list --format=json                                                                                                                                                                                               │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p old-k8s-version-990757 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-772252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-772252 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ delete  │ -p old-k8s-version-990757                                                                                                                                                                                                                     │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ no-preload-541522 image list --format=json                                                                                                                                                                                                    │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p no-preload-541522 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ delete  │ -p old-k8s-version-990757                                                                                                                                                                                                                     │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ embed-certs-412306 image list --format=json                                                                                                                                                                                                   │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p newest-cni-956615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ pause   │ -p embed-certs-412306 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:18:16
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:18:16.055139  384087 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:18:16.055453  384087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:16.055465  384087 out.go:374] Setting ErrFile to fd 2...
	I1123 10:18:16.055471  384087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:16.055752  384087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:18:16.056433  384087 out.go:368] Setting JSON to false
	I1123 10:18:16.058300  384087 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10837,"bootTime":1763882259,"procs":489,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:18:16.058361  384087 start.go:143] virtualization: kvm guest
	I1123 10:18:16.060255  384087 out.go:179] * [newest-cni-956615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:18:16.062154  384087 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:18:16.062208  384087 notify.go:221] Checking for updates...
	I1123 10:18:16.065653  384087 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:18:16.066941  384087 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:18:16.068519  384087 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 10:18:16.069705  384087 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:18:16.070753  384087 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:18:16.075296  384087 config.go:182] Loaded profile config "default-k8s-diff-port-772252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:16.075441  384087 config.go:182] Loaded profile config "embed-certs-412306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:16.075581  384087 config.go:182] Loaded profile config "no-preload-541522": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:16.075700  384087 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:18:16.103550  384087 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 10:18:16.103724  384087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:18:16.179700  384087 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:84 SystemTime:2025-11-23 10:18:16.167474698 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:18:16.179880  384087 docker.go:319] overlay module found
	I1123 10:18:16.181785  384087 out.go:179] * Using the docker driver based on user configuration
	I1123 10:18:16.182797  384087 start.go:309] selected driver: docker
	I1123 10:18:16.182811  384087 start.go:927] validating driver "docker" against <nil>
	I1123 10:18:16.182821  384087 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:18:16.183397  384087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:18:16.255867  384087 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:83 SystemTime:2025-11-23 10:18:16.241912897 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:18:16.256083  384087 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1123 10:18:16.256153  384087 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1123 10:18:16.256493  384087 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:18:16.258264  384087 out.go:179] * Using Docker driver with root privileges
	I1123 10:18:16.259381  384087 cni.go:84] Creating CNI manager for ""
	I1123 10:18:16.259470  384087 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:18:16.259481  384087 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:18:16.259575  384087 start.go:353] cluster config:
	{Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:18:16.260875  384087 out.go:179] * Starting "newest-cni-956615" primary control-plane node in "newest-cni-956615" cluster
	I1123 10:18:16.262276  384087 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:18:16.263490  384087 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:18:16.265212  384087 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:18:16.265252  384087 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 10:18:16.265262  384087 cache.go:65] Caching tarball of preloaded images
	I1123 10:18:16.265304  384087 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:18:16.265381  384087 preload.go:238] Found /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 10:18:16.265397  384087 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:18:16.265504  384087 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/config.json ...
	I1123 10:18:16.265527  384087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/config.json: {Name:mkb811d74a6c8dfdcb785bec927cfa094dfd91e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:16.288941  384087 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:18:16.288968  384087 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:18:16.289001  384087 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:18:16.289049  384087 start.go:360] acquireMachinesLock for newest-cni-956615: {Name:mk5c1d30234ac54be25b363f4d474b6dfbb1cb30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:18:16.289196  384087 start.go:364] duration metric: took 122.072µs to acquireMachinesLock for "newest-cni-956615"
	I1123 10:18:16.289230  384087 start.go:93] Provisioning new machine with config: &{Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:18:16.289350  384087 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Nov 23 10:17:41 embed-certs-412306 crio[565]: time="2025-11-23T10:17:41.442182299Z" level=info msg="Created container a13d6171b7f17830237a3cf2ae96d3362f30e2cedebf638fd57cb088e78597c5: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dw5cf/kubernetes-dashboard" id=ff8c8fce-7ff1-4a6d-b6b7-9b02f3bcb6c4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:41 embed-certs-412306 crio[565]: time="2025-11-23T10:17:41.442732831Z" level=info msg="Starting container: a13d6171b7f17830237a3cf2ae96d3362f30e2cedebf638fd57cb088e78597c5" id=a8fa3389-77e1-45ce-843a-589657a1fb72 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:17:41 embed-certs-412306 crio[565]: time="2025-11-23T10:17:41.444885335Z" level=info msg="Started container" PID=1741 containerID=a13d6171b7f17830237a3cf2ae96d3362f30e2cedebf638fd57cb088e78597c5 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dw5cf/kubernetes-dashboard id=a8fa3389-77e1-45ce-843a-589657a1fb72 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed45aaf5ac9cea1ab2e0164b6b70d03823cac925a976094dc4450af27103de63
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.444126978Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0bf90971-9817-4e7e-9529-088310d7c0e3 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.445179091Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a54896d7-bfdd-414c-9cb3-f7685580b72c name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.446377591Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp/dashboard-metrics-scraper" id=b86bffd1-0baa-41b3-92de-75083718ed46 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.446520725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.453435139Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.453918613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.480602913Z" level=info msg="Created container 142d4e2b0120e34731be21c77b8c41aff72dea2ade2760d95ab388bd80fef96f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp/dashboard-metrics-scraper" id=b86bffd1-0baa-41b3-92de-75083718ed46 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.481267022Z" level=info msg="Starting container: 142d4e2b0120e34731be21c77b8c41aff72dea2ade2760d95ab388bd80fef96f" id=190d4d7e-d370-4577-9580-e1a898672ae5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.483451555Z" level=info msg="Started container" PID=1763 containerID=142d4e2b0120e34731be21c77b8c41aff72dea2ade2760d95ab388bd80fef96f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp/dashboard-metrics-scraper id=190d4d7e-d370-4577-9580-e1a898672ae5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1f467f0aa942f811bd59df708d9ec322e91211378acda4c757fe022865886424
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.591595183Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c411f7dc-2087-422b-9e75-a507d7b2df2a name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.5926822Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=51533210-dfd8-44a0-8aa0-270761b0e063 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.593812052Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ad1c031f-7a82-4ff7-8c89-ffbed1bb5873 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.59395753Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.595496459Z" level=info msg="Removing container: e5bb39c4a88620dc6274844b1af0ef3f5f475f73ed096eefe30cecb7ff55fbd4" id=7faa7709-3f0a-4be4-ba65-c3128ae4523c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.600910652Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.601191808Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1aee1a57acb06ca32223c1c7be90c06075b877ca4599e5ef46149c46dcd162e1/merged/etc/passwd: no such file or directory"
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.601230271Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1aee1a57acb06ca32223c1c7be90c06075b877ca4599e5ef46149c46dcd162e1/merged/etc/group: no such file or directory"
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.602200227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.608451629Z" level=info msg="Removed container e5bb39c4a88620dc6274844b1af0ef3f5f475f73ed096eefe30cecb7ff55fbd4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp/dashboard-metrics-scraper" id=7faa7709-3f0a-4be4-ba65-c3128ae4523c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.628161045Z" level=info msg="Created container 704bba87333e873742681d9c76cf92f3fe506464ae3f386988d14477495c41ff: kube-system/storage-provisioner/storage-provisioner" id=ad1c031f-7a82-4ff7-8c89-ffbed1bb5873 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.628881905Z" level=info msg="Starting container: 704bba87333e873742681d9c76cf92f3fe506464ae3f386988d14477495c41ff" id=d6d76b79-3b86-49da-bcd2-2ea582024abe name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.630855725Z" level=info msg="Started container" PID=1773 containerID=704bba87333e873742681d9c76cf92f3fe506464ae3f386988d14477495c41ff description=kube-system/storage-provisioner/storage-provisioner id=d6d76b79-3b86-49da-bcd2-2ea582024abe name=/runtime.v1.RuntimeService/StartContainer sandboxID=986b10779f1bd022b15135476405eca05c6c34d3eab9ec8defd4960167d9b758
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	704bba87333e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   986b10779f1bd       storage-provisioner                          kube-system
	142d4e2b0120e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   1f467f0aa942f       dashboard-metrics-scraper-6ffb444bf9-4bhjp   kubernetes-dashboard
	a13d6171b7f17       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago      Running             kubernetes-dashboard        0                   ed45aaf5ac9ce       kubernetes-dashboard-855c9754f9-dw5cf        kubernetes-dashboard
	848c3d202c1fa       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   99a024a1dc38e       busybox                                      default
	75724cb907ea9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   029de2444a9b1       coredns-66bc5c9577-fxl7j                     kube-system
	c50f133d26d04       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   986b10779f1bd       storage-provisioner                          kube-system
	c521abd718034       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   5aeee6508d779       kindnet-sm2h2                                kube-system
	a85287776ccce       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           49 seconds ago      Running             kube-proxy                  0                   bb95f65bddc0b       kube-proxy-2vnjq                             kube-system
	0632950c74da2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           52 seconds ago      Running             kube-controller-manager     0                   0bd1dc40d591c       kube-controller-manager-embed-certs-412306   kube-system
	b7c384560289e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   4c690318a0937       etcd-embed-certs-412306                      kube-system
	3ce42ea391320       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   595e72b83ad4d       kube-apiserver-embed-certs-412306            kube-system
	e3ffbd81d631a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   b1ca69ef92142       kube-scheduler-embed-certs-412306            kube-system
	
	
	==> coredns [75724cb907ea93e8f5e1f738cd27ef6c1c393779cd23520ffec658e64d9a901b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34606 - 58366 "HINFO IN 6125505052027134514.6204527211007466347. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.035732313s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-412306
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-412306
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=embed-certs-412306
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_16_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:16:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-412306
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:18:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:18:01 +0000   Sun, 23 Nov 2025 10:16:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:18:01 +0000   Sun, 23 Nov 2025 10:16:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:18:01 +0000   Sun, 23 Nov 2025 10:16:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:18:01 +0000   Sun, 23 Nov 2025 10:16:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-412306
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                f548ff8d-94a1-438a-a9c0-5f1765fa56bb
	  Boot ID:                    37682299-5e60-467e-85b2-43c912a4056e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-fxl7j                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-embed-certs-412306                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-sm2h2                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-embed-certs-412306             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-embed-certs-412306    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-2vnjq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-embed-certs-412306             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4bhjp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dw5cf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 49s                  kube-proxy       
	  Normal  Starting                 116s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet          Node embed-certs-412306 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet          Node embed-certs-412306 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x8 over 116s)  kubelet          Node embed-certs-412306 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    111s                 kubelet          Node embed-certs-412306 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  111s                 kubelet          Node embed-certs-412306 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     111s                 kubelet          Node embed-certs-412306 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s                 node-controller  Node embed-certs-412306 event: Registered Node embed-certs-412306 in Controller
	  Normal  NodeReady                94s                  kubelet          Node embed-certs-412306 status is now: NodeReady
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)    kubelet          Node embed-certs-412306 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)    kubelet          Node embed-certs-412306 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)    kubelet          Node embed-certs-412306 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                  node-controller  Node embed-certs-412306 event: Registered Node embed-certs-412306 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[ +16.383752] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 09:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 10:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[Nov23 10:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 16 63 a6 3b 7c 08 06
	[  +0.000421] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e f8 56 88 48 d7 08 06
	[  +0.082350] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff be 6d 17 98 af e9 08 06
	[  +0.000334] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[ +24.687881] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 3c b3 56 e6 32 08 06
	[  +0.000364] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da b2 25 9e f0 5d 08 06
	[Nov23 10:16] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	[ +42.472302] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 bc be 6d 36 b3 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	
	
	==> etcd [b7c384560289e99b732f0e7897327765130672b6e7346a6340bd2a1e35372ea5] <==
	{"level":"warn","ts":"2025-11-23T10:17:29.407545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.416011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.425244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.433999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.443888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.453616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.465039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.472545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.481579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.490190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.498533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.506433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.515823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.523033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.541359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.550601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.559464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.618794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39478","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T10:17:33.681566Z","caller":"traceutil/trace.go:172","msg":"trace[781633894] transaction","detail":"{read_only:false; response_revision:514; number_of_response:1; }","duration":"106.040295ms","start":"2025-11-23T10:17:33.575509Z","end":"2025-11-23T10:17:33.681550Z","steps":["trace[781633894] 'process raft request'  (duration: 106.006282ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:17:33.681629Z","caller":"traceutil/trace.go:172","msg":"trace[1102436310] transaction","detail":"{read_only:false; response_revision:513; number_of_response:1; }","duration":"106.093252ms","start":"2025-11-23T10:17:33.575498Z","end":"2025-11-23T10:17:33.681591Z","steps":["trace[1102436310] 'process raft request'  (duration: 105.900284ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:17:33.803973Z","caller":"traceutil/trace.go:172","msg":"trace[73565206] linearizableReadLoop","detail":"{readStateIndex:542; appliedIndex:542; }","duration":"122.648793ms","start":"2025-11-23T10:17:33.681304Z","end":"2025-11-23T10:17:33.803953Z","steps":["trace[73565206] 'read index received'  (duration: 122.642289ms)","trace[73565206] 'applied index is now lower than readState.Index'  (duration: 5.507µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T10:17:33.807271Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.662467ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-23T10:17:33.807357Z","caller":"traceutil/trace.go:172","msg":"trace[2008106258] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:514; }","duration":"137.775474ms","start":"2025-11-23T10:17:33.669567Z","end":"2025-11-23T10:17:33.807342Z","steps":["trace[2008106258] 'agreement among raft nodes before linearized reading'  (duration: 134.516343ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:17:33.807772Z","caller":"traceutil/trace.go:172","msg":"trace[1038072] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"232.086226ms","start":"2025-11-23T10:17:33.575671Z","end":"2025-11-23T10:17:33.807757Z","steps":["trace[1038072] 'process raft request'  (duration: 228.402919ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:18:21.206219Z","caller":"traceutil/trace.go:172","msg":"trace[147265299] transaction","detail":"{read_only:false; response_revision:672; number_of_response:1; }","duration":"142.715651ms","start":"2025-11-23T10:18:21.063488Z","end":"2025-11-23T10:18:21.206203Z","steps":["trace[147265299] 'process raft request'  (duration: 142.57669ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:18:21 up  3:00,  0 user,  load average: 4.49, 4.97, 2.99
	Linux embed-certs-412306 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c521abd71803403723cd9adfee52f1ca392c31bc569759181fa969d175d352d0] <==
	I1123 10:17:30.971161       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:17:30.971421       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1123 10:17:30.971607       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:17:30.971630       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:17:30.971653       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:17:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:17:31.272620       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:17:31.272674       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:17:31.272687       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:17:31.272821       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:17:31.666751       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:17:31.666807       1 metrics.go:72] Registering metrics
	I1123 10:17:31.666907       1 controller.go:711] "Syncing nftables rules"
	I1123 10:17:41.271610       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 10:17:41.271655       1 main.go:301] handling current node
	I1123 10:17:51.272047       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 10:17:51.272121       1 main.go:301] handling current node
	I1123 10:18:01.271872       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 10:18:01.271928       1 main.go:301] handling current node
	I1123 10:18:11.274336       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 10:18:11.274387       1 main.go:301] handling current node
	I1123 10:18:21.279180       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 10:18:21.279211       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3ce42ea391320b5ee86e145a2f64c2015bb9f8236b5dfa38af9a25f2cb484824] <==
	I1123 10:17:30.222396       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 10:17:30.223939       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:17:30.224962       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 10:17:30.225147       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 10:17:30.225290       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 10:17:30.225513       1 aggregator.go:171] initial CRD sync complete...
	I1123 10:17:30.225631       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 10:17:30.225664       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:17:30.225707       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:17:30.228624       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 10:17:30.228686       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1123 10:17:30.232317       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 10:17:30.233796       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 10:17:30.258911       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 10:17:30.466190       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:17:30.579286       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:17:30.608615       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:17:30.632083       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:17:30.642786       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:17:30.694414       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.209.242"}
	I1123 10:17:30.709882       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.129.60"}
	I1123 10:17:31.123545       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:17:33.574937       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:17:34.084969       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:17:34.127161       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0632950c74da2eb4978b2f96c82351b0c7fc311f03cdaaff9f60fb24bdaa3804] <==
	I1123 10:17:33.557123       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 10:17:33.559994       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 10:17:33.572517       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 10:17:33.572553       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 10:17:33.572544       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 10:17:33.575394       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:17:33.578226       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 10:17:33.579466       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 10:17:33.580700       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 10:17:33.582941       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 10:17:33.584553       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 10:17:33.587015       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 10:17:33.589367       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 10:17:33.591714       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:17:33.592787       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 10:17:33.595050       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 10:17:33.596232       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 10:17:33.596249       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:17:33.596270       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:17:33.598517       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 10:17:33.601755       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 10:17:33.601817       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:17:33.601859       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 10:17:33.601936       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-412306"
	I1123 10:17:33.601988       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [a85287776ccce12df9499782bd76fd12f6a905bc4752aa767522a684fb205ca7] <==
	I1123 10:17:30.853039       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:17:30.926295       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:17:31.026854       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:17:31.026893       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1123 10:17:31.027039       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:17:31.049031       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:17:31.049187       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:17:31.055385       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:17:31.055784       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:17:31.055873       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:17:31.057417       1 config.go:200] "Starting service config controller"
	I1123 10:17:31.057448       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:17:31.057496       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:17:31.057504       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:17:31.057509       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:17:31.057511       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:17:31.057529       1 config.go:309] "Starting node config controller"
	I1123 10:17:31.057534       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:17:31.057540       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:17:31.158298       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:17:31.158316       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:17:31.158340       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e3ffbd81d631a2d4ada1879aabcbc74e4a0a1df338a0ca8e07cf4c3ff88f9430] <==
	I1123 10:17:27.590997       1 serving.go:386] Generated self-signed cert in-memory
	I1123 10:17:30.206421       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 10:17:30.206460       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:17:30.219727       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:17:30.219779       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 10:17:30.219820       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:17:30.221676       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:17:30.219863       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 10:17:30.219847       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 10:17:30.223982       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 10:17:30.224041       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 10:17:30.322017       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:17:30.324139       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 10:17:30.324150       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:17:30 embed-certs-412306 kubelet[728]: I1123 10:17:30.498077     728 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 23 10:17:34 embed-certs-412306 kubelet[728]: I1123 10:17:34.296369     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fbc63048-24c4-4cc1-8cf1-dcacbe4ba959-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-dw5cf\" (UID: \"fbc63048-24c4-4cc1-8cf1-dcacbe4ba959\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dw5cf"
	Nov 23 10:17:34 embed-certs-412306 kubelet[728]: I1123 10:17:34.296871     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/13215de9-0ff0-4c2a-8064-7d411ad69859-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4bhjp\" (UID: \"13215de9-0ff0-4c2a-8064-7d411ad69859\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp"
	Nov 23 10:17:34 embed-certs-412306 kubelet[728]: I1123 10:17:34.296936     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7bxx\" (UniqueName: \"kubernetes.io/projected/fbc63048-24c4-4cc1-8cf1-dcacbe4ba959-kube-api-access-k7bxx\") pod \"kubernetes-dashboard-855c9754f9-dw5cf\" (UID: \"fbc63048-24c4-4cc1-8cf1-dcacbe4ba959\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dw5cf"
	Nov 23 10:17:34 embed-certs-412306 kubelet[728]: I1123 10:17:34.296978     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6hz7\" (UniqueName: \"kubernetes.io/projected/13215de9-0ff0-4c2a-8064-7d411ad69859-kube-api-access-n6hz7\") pod \"dashboard-metrics-scraper-6ffb444bf9-4bhjp\" (UID: \"13215de9-0ff0-4c2a-8064-7d411ad69859\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp"
	Nov 23 10:17:37 embed-certs-412306 kubelet[728]: I1123 10:17:37.525698     728 scope.go:117] "RemoveContainer" containerID="e26e58ec6a48dcd2b12fdac095723967eb0d4f8d0a0bac4df44b6bc26963f14f"
	Nov 23 10:17:38 embed-certs-412306 kubelet[728]: I1123 10:17:38.530061     728 scope.go:117] "RemoveContainer" containerID="e26e58ec6a48dcd2b12fdac095723967eb0d4f8d0a0bac4df44b6bc26963f14f"
	Nov 23 10:17:38 embed-certs-412306 kubelet[728]: I1123 10:17:38.530202     728 scope.go:117] "RemoveContainer" containerID="e5bb39c4a88620dc6274844b1af0ef3f5f475f73ed096eefe30cecb7ff55fbd4"
	Nov 23 10:17:38 embed-certs-412306 kubelet[728]: E1123 10:17:38.530445     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4bhjp_kubernetes-dashboard(13215de9-0ff0-4c2a-8064-7d411ad69859)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp" podUID="13215de9-0ff0-4c2a-8064-7d411ad69859"
	Nov 23 10:17:39 embed-certs-412306 kubelet[728]: I1123 10:17:39.534807     728 scope.go:117] "RemoveContainer" containerID="e5bb39c4a88620dc6274844b1af0ef3f5f475f73ed096eefe30cecb7ff55fbd4"
	Nov 23 10:17:39 embed-certs-412306 kubelet[728]: E1123 10:17:39.535046     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4bhjp_kubernetes-dashboard(13215de9-0ff0-4c2a-8064-7d411ad69859)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp" podUID="13215de9-0ff0-4c2a-8064-7d411ad69859"
	Nov 23 10:17:41 embed-certs-412306 kubelet[728]: I1123 10:17:41.554697     728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dw5cf" podStartSLOduration=0.703843681 podStartE2EDuration="7.554664882s" podCreationTimestamp="2025-11-23 10:17:34 +0000 UTC" firstStartedPulling="2025-11-23 10:17:34.554034118 +0000 UTC m=+8.258087749" lastFinishedPulling="2025-11-23 10:17:41.404855329 +0000 UTC m=+15.108908950" observedRunningTime="2025-11-23 10:17:41.553635493 +0000 UTC m=+15.257689131" watchObservedRunningTime="2025-11-23 10:17:41.554664882 +0000 UTC m=+15.258718511"
	Nov 23 10:17:47 embed-certs-412306 kubelet[728]: I1123 10:17:47.214173     728 scope.go:117] "RemoveContainer" containerID="e5bb39c4a88620dc6274844b1af0ef3f5f475f73ed096eefe30cecb7ff55fbd4"
	Nov 23 10:17:47 embed-certs-412306 kubelet[728]: E1123 10:17:47.214375     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4bhjp_kubernetes-dashboard(13215de9-0ff0-4c2a-8064-7d411ad69859)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp" podUID="13215de9-0ff0-4c2a-8064-7d411ad69859"
	Nov 23 10:18:01 embed-certs-412306 kubelet[728]: I1123 10:18:01.443504     728 scope.go:117] "RemoveContainer" containerID="e5bb39c4a88620dc6274844b1af0ef3f5f475f73ed096eefe30cecb7ff55fbd4"
	Nov 23 10:18:01 embed-certs-412306 kubelet[728]: I1123 10:18:01.591134     728 scope.go:117] "RemoveContainer" containerID="c50f133d26d04101f2479db4f241a3a6ef37b6beb8a70dd8044463313b1b1ba7"
	Nov 23 10:18:01 embed-certs-412306 kubelet[728]: I1123 10:18:01.593339     728 scope.go:117] "RemoveContainer" containerID="e5bb39c4a88620dc6274844b1af0ef3f5f475f73ed096eefe30cecb7ff55fbd4"
	Nov 23 10:18:01 embed-certs-412306 kubelet[728]: I1123 10:18:01.593576     728 scope.go:117] "RemoveContainer" containerID="142d4e2b0120e34731be21c77b8c41aff72dea2ade2760d95ab388bd80fef96f"
	Nov 23 10:18:01 embed-certs-412306 kubelet[728]: E1123 10:18:01.593753     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4bhjp_kubernetes-dashboard(13215de9-0ff0-4c2a-8064-7d411ad69859)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp" podUID="13215de9-0ff0-4c2a-8064-7d411ad69859"
	Nov 23 10:18:07 embed-certs-412306 kubelet[728]: I1123 10:18:07.214388     728 scope.go:117] "RemoveContainer" containerID="142d4e2b0120e34731be21c77b8c41aff72dea2ade2760d95ab388bd80fef96f"
	Nov 23 10:18:07 embed-certs-412306 kubelet[728]: E1123 10:18:07.214632     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4bhjp_kubernetes-dashboard(13215de9-0ff0-4c2a-8064-7d411ad69859)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp" podUID="13215de9-0ff0-4c2a-8064-7d411ad69859"
	Nov 23 10:18:16 embed-certs-412306 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:18:16 embed-certs-412306 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:18:16 embed-certs-412306 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 10:18:16 embed-certs-412306 systemd[1]: kubelet.service: Consumed 1.625s CPU time.
	
	
	==> kubernetes-dashboard [a13d6171b7f17830237a3cf2ae96d3362f30e2cedebf638fd57cb088e78597c5] <==
	2025/11/23 10:17:41 Using namespace: kubernetes-dashboard
	2025/11/23 10:17:41 Using in-cluster config to connect to apiserver
	2025/11/23 10:17:41 Using secret token for csrf signing
	2025/11/23 10:17:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 10:17:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 10:17:41 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 10:17:41 Generating JWE encryption key
	2025/11/23 10:17:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 10:17:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 10:17:41 Initializing JWE encryption key from synchronized object
	2025/11/23 10:17:41 Creating in-cluster Sidecar client
	2025/11/23 10:17:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:17:41 Serving insecurely on HTTP port: 9090
	2025/11/23 10:18:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:17:41 Starting overwatch
	
	
	==> storage-provisioner [704bba87333e873742681d9c76cf92f3fe506464ae3f386988d14477495c41ff] <==
	I1123 10:18:01.643784       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:18:01.651354       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:18:01.651399       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:18:01.653810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:05.109178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:09.370166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:12.968663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:16.023041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:19.045617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:19.051671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:18:19.051832       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:18:19.052170       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6461001a-51cb-46e2-995d-2cc675b065ba", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-412306_fa8dd88d-b6bd-4ed8-ac0d-0fb40b81fadf became leader
	I1123 10:18:19.052246       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-412306_fa8dd88d-b6bd-4ed8-ac0d-0fb40b81fadf!
	W1123 10:18:19.054256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:19.057784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:18:19.152822       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-412306_fa8dd88d-b6bd-4ed8-ac0d-0fb40b81fadf!
	W1123 10:18:21.061069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:21.207295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c50f133d26d04101f2479db4f241a3a6ef37b6beb8a70dd8044463313b1b1ba7] <==
	I1123 10:17:30.816818       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 10:18:00.819130       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
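The fatal storage-provisioner line near the end of the log above ("error getting server version ... i/o timeout") means the provisioner could not reach the in-cluster apiserver endpoint 10.96.0.1:443 within its startup window. Below is a minimal Go sketch of the same connectivity probe, intended to be run from inside the cluster network; the service URL is copied from the log, while the 5-second timeout and the skipped certificate verification are assumptions made only for this probe:

	// version_probe.go: repeat the /version request the failed storage-provisioner made.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // assumption; the provisioner itself used timeout=32s
			// Certificate verification is skipped because this is only a reachability probe.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://10.96.0.1:443/version")
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // corresponds to the i/o timeout above
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver reachable:", resp.Status)
	}

If the same probe succeeds from another pod on the node, the timeout would point more at the provisioner's startup timing than at pod-to-service networking.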
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-412306 -n embed-certs-412306
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-412306 -n embed-certs-412306: exit status 2 (366.679131ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-412306 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
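For context on the probes above: minikube status printed "Running" yet exited with status 2, which the harness treats as possibly expected ("may be ok") for a profile that has just been paused. A minimal sketch of the same probe, assuming the minikube binary is on PATH and the embed-certs-412306 profile still exists, that separates the printed template output from the exit code:

	// status_probe.go: run the same status query as helpers_test.go:262 and report the exit code.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "status", "--format={{.APIServer}}", "-p", "embed-certs-412306")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output: %s\n", out) // e.g. "Running", as in the stdout block above
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Println("exit status:", exitErr.ExitCode()) // 2 in the failing run above
		} else if err != nil {
			fmt.Println("could not run minikube:", err)
		}
	}

The template output and the exit code carry different information, which is presumably why the harness records both.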
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-412306
helpers_test.go:243: (dbg) docker inspect embed-certs-412306:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2363fe4602f510ad2579d8b3b443f201366bc865187d0f0f21ea72677edf75dd",
	        "Created": "2025-11-23T10:16:14.870430409Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 374065,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:17:19.862346196Z",
	            "FinishedAt": "2025-11-23T10:17:18.911006568Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/2363fe4602f510ad2579d8b3b443f201366bc865187d0f0f21ea72677edf75dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2363fe4602f510ad2579d8b3b443f201366bc865187d0f0f21ea72677edf75dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/2363fe4602f510ad2579d8b3b443f201366bc865187d0f0f21ea72677edf75dd/hosts",
	        "LogPath": "/var/lib/docker/containers/2363fe4602f510ad2579d8b3b443f201366bc865187d0f0f21ea72677edf75dd/2363fe4602f510ad2579d8b3b443f201366bc865187d0f0f21ea72677edf75dd-json.log",
	        "Name": "/embed-certs-412306",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-412306:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-412306",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2363fe4602f510ad2579d8b3b443f201366bc865187d0f0f21ea72677edf75dd",
	                "LowerDir": "/var/lib/docker/overlay2/48da241729f2aaaab120e58658600759e52c4c030fbd00be0d48925dc10c5b62-init/diff:/var/lib/docker/overlay2/fa24abb4c55f78a010c7e2a32f724b8d5e912441e40bb77877899b0e5f3a9c8d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/48da241729f2aaaab120e58658600759e52c4c030fbd00be0d48925dc10c5b62/merged",
	                "UpperDir": "/var/lib/docker/overlay2/48da241729f2aaaab120e58658600759e52c4c030fbd00be0d48925dc10c5b62/diff",
	                "WorkDir": "/var/lib/docker/overlay2/48da241729f2aaaab120e58658600759e52c4c030fbd00be0d48925dc10c5b62/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-412306",
	                "Source": "/var/lib/docker/volumes/embed-certs-412306/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-412306",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-412306",
	                "name.minikube.sigs.k8s.io": "embed-certs-412306",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "51762594728b68704c84311aceb4f8acd182d074e7273baaad2816a0181ab11d",
	            "SandboxKey": "/var/run/docker/netns/51762594728b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-412306": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "80c19d1f62c6174f298a861aa9911c5900bfe0857882aac57b7c600a7d06c5aa",
	                    "EndpointID": "967af06264138e34df8b82735ea8cf22985c7cad0683abda3e2c73c355bd28c1",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "02:cc:15:e9:ba:6b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-412306",
	                        "2363fe4602f5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
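The inspect output above also records the published ports of the node container (for example 8443/tcp, the apiserver port, mapped to 127.0.0.1:33121). A small sketch, assuming docker is on PATH and the container still exists, that pulls a single mapping out with an inspect format template instead of scanning the full JSON:

	// port_lookup.go: read the host port bound to 8443/tcp from docker inspect.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", tmpl, "embed-certs-412306").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Printf("apiserver published on 127.0.0.1:%s", out) // 33121 in the output above
	}

The same template shape works for any of the five ports listed under NetworkSettings.Ports.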
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-412306 -n embed-certs-412306
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-412306 -n embed-certs-412306: exit status 2 (347.018228ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-412306 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-412306 logs -n 25: (1.175855602s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-791161 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo containerd config dump                                                                                                                                                                                                  │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ ssh     │ -p bridge-791161 sudo crio config                                                                                                                                                                                                             │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ delete  │ -p bridge-791161                                                                                                                                                                                                                              │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ delete  │ -p disable-driver-mounts-268907                                                                                                                                                                                                               │ disable-driver-mounts-268907 │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-541522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p default-k8s-diff-port-772252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-412306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ start   │ -p embed-certs-412306 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ old-k8s-version-990757 image list --format=json                                                                                                                                                                                               │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p old-k8s-version-990757 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-772252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-772252 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ delete  │ -p old-k8s-version-990757                                                                                                                                                                                                                     │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ no-preload-541522 image list --format=json                                                                                                                                                                                                    │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p no-preload-541522 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ delete  │ -p old-k8s-version-990757                                                                                                                                                                                                                     │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ embed-certs-412306 image list --format=json                                                                                                                                                                                                   │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p newest-cni-956615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ pause   │ -p embed-certs-412306 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ delete  │ -p no-preload-541522                                                                                                                                                                                                                          │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
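The Audit rows with an empty END TIME (the pause commands, the metrics-server enable, and the stop of default-k8s-diff-port-772252) appear to be the commands that failed or were still running when this log was collected. A minimal sketch, with the command line copied from the table and an assumed 30-second deadline, of re-running the embed-certs pause so a hang surfaces as a timeout rather than an open-ended wait:

	// pause_rerun.go: re-run the pause command from the Audit table under a deadline.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) // assumed limit
		defer cancel()
		cmd := exec.CommandContext(ctx, "minikube", "pause", "-p", "embed-certs-412306", "--alsologtostderr", "-v=1")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s\n", out)
		if ctx.Err() == context.DeadlineExceeded {
			fmt.Println("pause did not finish within 30s")
		} else if err != nil {
			fmt.Println("pause failed:", err)
		}
	}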
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:18:16
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:18:16.055139  384087 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:18:16.055453  384087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:16.055465  384087 out.go:374] Setting ErrFile to fd 2...
	I1123 10:18:16.055471  384087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:16.055752  384087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:18:16.056433  384087 out.go:368] Setting JSON to false
	I1123 10:18:16.058300  384087 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10837,"bootTime":1763882259,"procs":489,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:18:16.058361  384087 start.go:143] virtualization: kvm guest
	I1123 10:18:16.060255  384087 out.go:179] * [newest-cni-956615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:18:16.062154  384087 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:18:16.062208  384087 notify.go:221] Checking for updates...
	I1123 10:18:16.065653  384087 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:18:16.066941  384087 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:18:16.068519  384087 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 10:18:16.069705  384087 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:18:16.070753  384087 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:18:16.075296  384087 config.go:182] Loaded profile config "default-k8s-diff-port-772252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:16.075441  384087 config.go:182] Loaded profile config "embed-certs-412306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:16.075581  384087 config.go:182] Loaded profile config "no-preload-541522": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:16.075700  384087 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:18:16.103550  384087 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 10:18:16.103724  384087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:18:16.179700  384087 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:84 SystemTime:2025-11-23 10:18:16.167474698 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:18:16.179880  384087 docker.go:319] overlay module found
	I1123 10:18:16.181785  384087 out.go:179] * Using the docker driver based on user configuration
	I1123 10:18:16.182797  384087 start.go:309] selected driver: docker
	I1123 10:18:16.182811  384087 start.go:927] validating driver "docker" against <nil>
	I1123 10:18:16.182821  384087 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:18:16.183397  384087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:18:16.255867  384087 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:83 SystemTime:2025-11-23 10:18:16.241912897 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:18:16.256083  384087 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1123 10:18:16.256153  384087 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1123 10:18:16.256493  384087 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:18:16.258264  384087 out.go:179] * Using Docker driver with root privileges
	I1123 10:18:16.259381  384087 cni.go:84] Creating CNI manager for ""
	I1123 10:18:16.259470  384087 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:18:16.259481  384087 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:18:16.259575  384087 start.go:353] cluster config:
	{Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:18:16.260875  384087 out.go:179] * Starting "newest-cni-956615" primary control-plane node in "newest-cni-956615" cluster
	I1123 10:18:16.262276  384087 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:18:16.263490  384087 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:18:16.265212  384087 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:18:16.265252  384087 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 10:18:16.265262  384087 cache.go:65] Caching tarball of preloaded images
	I1123 10:18:16.265304  384087 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:18:16.265381  384087 preload.go:238] Found /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 10:18:16.265397  384087 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:18:16.265504  384087 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/config.json ...
	I1123 10:18:16.265527  384087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/config.json: {Name:mkb811d74a6c8dfdcb785bec927cfa094dfd91e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:16.288941  384087 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:18:16.288968  384087 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:18:16.289001  384087 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:18:16.289049  384087 start.go:360] acquireMachinesLock for newest-cni-956615: {Name:mk5c1d30234ac54be25b363f4d474b6dfbb1cb30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:18:16.289196  384087 start.go:364] duration metric: took 122.072µs to acquireMachinesLock for "newest-cni-956615"
	I1123 10:18:16.289230  384087 start.go:93] Provisioning new machine with config: &{Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:18:16.289350  384087 start.go:125] createHost starting for "" (driver="docker")
	I1123 10:18:16.291450  384087 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 10:18:16.291691  384087 start.go:159] libmachine.API.Create for "newest-cni-956615" (driver="docker")
	I1123 10:18:16.291723  384087 client.go:173] LocalClient.Create starting
	I1123 10:18:16.291794  384087 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem
	I1123 10:18:16.291828  384087 main.go:143] libmachine: Decoding PEM data...
	I1123 10:18:16.291855  384087 main.go:143] libmachine: Parsing certificate...
	I1123 10:18:16.291932  384087 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem
	I1123 10:18:16.291960  384087 main.go:143] libmachine: Decoding PEM data...
	I1123 10:18:16.291980  384087 main.go:143] libmachine: Parsing certificate...
	I1123 10:18:16.292388  384087 cli_runner.go:164] Run: docker network inspect newest-cni-956615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 10:18:16.311106  384087 cli_runner.go:211] docker network inspect newest-cni-956615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 10:18:16.311171  384087 network_create.go:284] running [docker network inspect newest-cni-956615] to gather additional debugging logs...
	I1123 10:18:16.311198  384087 cli_runner.go:164] Run: docker network inspect newest-cni-956615
	W1123 10:18:16.328916  384087 cli_runner.go:211] docker network inspect newest-cni-956615 returned with exit code 1
	I1123 10:18:16.328954  384087 network_create.go:287] error running [docker network inspect newest-cni-956615]: docker network inspect newest-cni-956615: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-956615 not found
	I1123 10:18:16.328971  384087 network_create.go:289] output of [docker network inspect newest-cni-956615]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-956615 not found
	
	** /stderr **
	I1123 10:18:16.329134  384087 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:18:16.349673  384087 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9af1e2c0d039 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:86:44:24:e5:b5} reservation:<nil>}
	I1123 10:18:16.350275  384087 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-461f783b5692 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:1f:63:e6:a3:d5} reservation:<nil>}
	I1123 10:18:16.351302  384087 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-00c53b2b0c8c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:de:97:e2:97:bc:92} reservation:<nil>}
	I1123 10:18:16.352519  384087 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e9d4b0}
	I1123 10:18:16.352551  384087 network_create.go:124] attempt to create docker network newest-cni-956615 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 10:18:16.352612  384087 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-956615 newest-cni-956615
	I1123 10:18:16.408181  384087 network_create.go:108] docker network newest-cni-956615 192.168.76.0/24 created
	I1123 10:18:16.408216  384087 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-956615" container
	I1123 10:18:16.408290  384087 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 10:18:16.428031  384087 cli_runner.go:164] Run: docker volume create newest-cni-956615 --label name.minikube.sigs.k8s.io=newest-cni-956615 --label created_by.minikube.sigs.k8s.io=true
	I1123 10:18:16.447795  384087 oci.go:103] Successfully created a docker volume newest-cni-956615
	I1123 10:18:16.447900  384087 cli_runner.go:164] Run: docker run --rm --name newest-cni-956615-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-956615 --entrypoint /usr/bin/test -v newest-cni-956615:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 10:18:16.877800  384087 oci.go:107] Successfully prepared a docker volume newest-cni-956615
	I1123 10:18:16.877886  384087 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:18:16.877901  384087 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 10:18:16.877996  384087 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-956615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Nov 23 10:17:41 embed-certs-412306 crio[565]: time="2025-11-23T10:17:41.442182299Z" level=info msg="Created container a13d6171b7f17830237a3cf2ae96d3362f30e2cedebf638fd57cb088e78597c5: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dw5cf/kubernetes-dashboard" id=ff8c8fce-7ff1-4a6d-b6b7-9b02f3bcb6c4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:17:41 embed-certs-412306 crio[565]: time="2025-11-23T10:17:41.442732831Z" level=info msg="Starting container: a13d6171b7f17830237a3cf2ae96d3362f30e2cedebf638fd57cb088e78597c5" id=a8fa3389-77e1-45ce-843a-589657a1fb72 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:17:41 embed-certs-412306 crio[565]: time="2025-11-23T10:17:41.444885335Z" level=info msg="Started container" PID=1741 containerID=a13d6171b7f17830237a3cf2ae96d3362f30e2cedebf638fd57cb088e78597c5 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dw5cf/kubernetes-dashboard id=a8fa3389-77e1-45ce-843a-589657a1fb72 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed45aaf5ac9cea1ab2e0164b6b70d03823cac925a976094dc4450af27103de63
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.444126978Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0bf90971-9817-4e7e-9529-088310d7c0e3 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.445179091Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a54896d7-bfdd-414c-9cb3-f7685580b72c name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.446377591Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp/dashboard-metrics-scraper" id=b86bffd1-0baa-41b3-92de-75083718ed46 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.446520725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.453435139Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.453918613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.480602913Z" level=info msg="Created container 142d4e2b0120e34731be21c77b8c41aff72dea2ade2760d95ab388bd80fef96f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp/dashboard-metrics-scraper" id=b86bffd1-0baa-41b3-92de-75083718ed46 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.481267022Z" level=info msg="Starting container: 142d4e2b0120e34731be21c77b8c41aff72dea2ade2760d95ab388bd80fef96f" id=190d4d7e-d370-4577-9580-e1a898672ae5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.483451555Z" level=info msg="Started container" PID=1763 containerID=142d4e2b0120e34731be21c77b8c41aff72dea2ade2760d95ab388bd80fef96f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp/dashboard-metrics-scraper id=190d4d7e-d370-4577-9580-e1a898672ae5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1f467f0aa942f811bd59df708d9ec322e91211378acda4c757fe022865886424
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.591595183Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c411f7dc-2087-422b-9e75-a507d7b2df2a name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.5926822Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=51533210-dfd8-44a0-8aa0-270761b0e063 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.593812052Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ad1c031f-7a82-4ff7-8c89-ffbed1bb5873 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.59395753Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.595496459Z" level=info msg="Removing container: e5bb39c4a88620dc6274844b1af0ef3f5f475f73ed096eefe30cecb7ff55fbd4" id=7faa7709-3f0a-4be4-ba65-c3128ae4523c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.600910652Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.601191808Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1aee1a57acb06ca32223c1c7be90c06075b877ca4599e5ef46149c46dcd162e1/merged/etc/passwd: no such file or directory"
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.601230271Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1aee1a57acb06ca32223c1c7be90c06075b877ca4599e5ef46149c46dcd162e1/merged/etc/group: no such file or directory"
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.602200227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.608451629Z" level=info msg="Removed container e5bb39c4a88620dc6274844b1af0ef3f5f475f73ed096eefe30cecb7ff55fbd4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp/dashboard-metrics-scraper" id=7faa7709-3f0a-4be4-ba65-c3128ae4523c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.628161045Z" level=info msg="Created container 704bba87333e873742681d9c76cf92f3fe506464ae3f386988d14477495c41ff: kube-system/storage-provisioner/storage-provisioner" id=ad1c031f-7a82-4ff7-8c89-ffbed1bb5873 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.628881905Z" level=info msg="Starting container: 704bba87333e873742681d9c76cf92f3fe506464ae3f386988d14477495c41ff" id=d6d76b79-3b86-49da-bcd2-2ea582024abe name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:18:01 embed-certs-412306 crio[565]: time="2025-11-23T10:18:01.630855725Z" level=info msg="Started container" PID=1773 containerID=704bba87333e873742681d9c76cf92f3fe506464ae3f386988d14477495c41ff description=kube-system/storage-provisioner/storage-provisioner id=d6d76b79-3b86-49da-bcd2-2ea582024abe name=/runtime.v1.RuntimeService/StartContainer sandboxID=986b10779f1bd022b15135476405eca05c6c34d3eab9ec8defd4960167d9b758
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	704bba87333e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   986b10779f1bd       storage-provisioner                          kube-system
	142d4e2b0120e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   1f467f0aa942f       dashboard-metrics-scraper-6ffb444bf9-4bhjp   kubernetes-dashboard
	a13d6171b7f17       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   ed45aaf5ac9ce       kubernetes-dashboard-855c9754f9-dw5cf        kubernetes-dashboard
	848c3d202c1fa       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   99a024a1dc38e       busybox                                      default
	75724cb907ea9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   029de2444a9b1       coredns-66bc5c9577-fxl7j                     kube-system
	c50f133d26d04       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   986b10779f1bd       storage-provisioner                          kube-system
	c521abd718034       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   5aeee6508d779       kindnet-sm2h2                                kube-system
	a85287776ccce       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   bb95f65bddc0b       kube-proxy-2vnjq                             kube-system
	0632950c74da2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   0bd1dc40d591c       kube-controller-manager-embed-certs-412306   kube-system
	b7c384560289e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   4c690318a0937       etcd-embed-certs-412306                      kube-system
	3ce42ea391320       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   595e72b83ad4d       kube-apiserver-embed-certs-412306            kube-system
	e3ffbd81d631a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   b1ca69ef92142       kube-scheduler-embed-certs-412306            kube-system
	
	
	==> coredns [75724cb907ea93e8f5e1f738cd27ef6c1c393779cd23520ffec658e64d9a901b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34606 - 58366 "HINFO IN 6125505052027134514.6204527211007466347. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.035732313s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-412306
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-412306
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=embed-certs-412306
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_16_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:16:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-412306
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:18:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:18:01 +0000   Sun, 23 Nov 2025 10:16:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:18:01 +0000   Sun, 23 Nov 2025 10:16:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:18:01 +0000   Sun, 23 Nov 2025 10:16:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:18:01 +0000   Sun, 23 Nov 2025 10:16:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-412306
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                f548ff8d-94a1-438a-a9c0-5f1765fa56bb
	  Boot ID:                    37682299-5e60-467e-85b2-43c912a4056e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-fxl7j                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-embed-certs-412306                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-sm2h2                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-embed-certs-412306             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-embed-certs-412306    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-2vnjq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-embed-certs-412306             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4bhjp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dw5cf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  119s (x8 over 119s)  kubelet          Node embed-certs-412306 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s (x8 over 119s)  kubelet          Node embed-certs-412306 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s (x8 over 119s)  kubelet          Node embed-certs-412306 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    114s                 kubelet          Node embed-certs-412306 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  114s                 kubelet          Node embed-certs-412306 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     114s                 kubelet          Node embed-certs-412306 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node embed-certs-412306 event: Registered Node embed-certs-412306 in Controller
	  Normal  NodeReady                97s                  kubelet          Node embed-certs-412306 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node embed-certs-412306 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node embed-certs-412306 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node embed-certs-412306 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                  node-controller  Node embed-certs-412306 event: Registered Node embed-certs-412306 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[ +16.383752] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 09:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 10:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[Nov23 10:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 16 63 a6 3b 7c 08 06
	[  +0.000421] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e f8 56 88 48 d7 08 06
	[  +0.082350] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff be 6d 17 98 af e9 08 06
	[  +0.000334] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[ +24.687881] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 3c b3 56 e6 32 08 06
	[  +0.000364] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da b2 25 9e f0 5d 08 06
	[Nov23 10:16] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	[ +42.472302] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 bc be 6d 36 b3 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	
	
	==> etcd [b7c384560289e99b732f0e7897327765130672b6e7346a6340bd2a1e35372ea5] <==
	{"level":"warn","ts":"2025-11-23T10:17:29.425244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.433999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.443888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.453616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.465039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.472545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.481579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.490190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.498533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.506433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.515823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.523033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.541359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.550601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.559464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:17:29.618794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39478","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T10:17:33.681566Z","caller":"traceutil/trace.go:172","msg":"trace[781633894] transaction","detail":"{read_only:false; response_revision:514; number_of_response:1; }","duration":"106.040295ms","start":"2025-11-23T10:17:33.575509Z","end":"2025-11-23T10:17:33.681550Z","steps":["trace[781633894] 'process raft request'  (duration: 106.006282ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:17:33.681629Z","caller":"traceutil/trace.go:172","msg":"trace[1102436310] transaction","detail":"{read_only:false; response_revision:513; number_of_response:1; }","duration":"106.093252ms","start":"2025-11-23T10:17:33.575498Z","end":"2025-11-23T10:17:33.681591Z","steps":["trace[1102436310] 'process raft request'  (duration: 105.900284ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:17:33.803973Z","caller":"traceutil/trace.go:172","msg":"trace[73565206] linearizableReadLoop","detail":"{readStateIndex:542; appliedIndex:542; }","duration":"122.648793ms","start":"2025-11-23T10:17:33.681304Z","end":"2025-11-23T10:17:33.803953Z","steps":["trace[73565206] 'read index received'  (duration: 122.642289ms)","trace[73565206] 'applied index is now lower than readState.Index'  (duration: 5.507µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T10:17:33.807271Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.662467ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-23T10:17:33.807357Z","caller":"traceutil/trace.go:172","msg":"trace[2008106258] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:514; }","duration":"137.775474ms","start":"2025-11-23T10:17:33.669567Z","end":"2025-11-23T10:17:33.807342Z","steps":["trace[2008106258] 'agreement among raft nodes before linearized reading'  (duration: 134.516343ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:17:33.807772Z","caller":"traceutil/trace.go:172","msg":"trace[1038072] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"232.086226ms","start":"2025-11-23T10:17:33.575671Z","end":"2025-11-23T10:17:33.807757Z","steps":["trace[1038072] 'process raft request'  (duration: 228.402919ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T10:18:21.206219Z","caller":"traceutil/trace.go:172","msg":"trace[147265299] transaction","detail":"{read_only:false; response_revision:672; number_of_response:1; }","duration":"142.715651ms","start":"2025-11-23T10:18:21.063488Z","end":"2025-11-23T10:18:21.206203Z","steps":["trace[147265299] 'process raft request'  (duration: 142.57669ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T10:18:21.621060Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"181.047732ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.94.2\" limit:1 ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-11-23T10:18:21.621165Z","caller":"traceutil/trace.go:172","msg":"trace[69264872] range","detail":"{range_begin:/registry/masterleases/192.168.94.2; range_end:; response_count:1; response_revision:673; }","duration":"181.163961ms","start":"2025-11-23T10:18:21.439982Z","end":"2025-11-23T10:18:21.621146Z","steps":["trace[69264872] 'range keys from in-memory index tree'  (duration: 180.882258ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:18:23 up  3:00,  0 user,  load average: 4.69, 5.00, 3.02
	Linux embed-certs-412306 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c521abd71803403723cd9adfee52f1ca392c31bc569759181fa969d175d352d0] <==
	I1123 10:17:30.971161       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:17:30.971421       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1123 10:17:30.971607       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:17:30.971630       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:17:30.971653       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:17:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:17:31.272620       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:17:31.272674       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:17:31.272687       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:17:31.272821       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:17:31.666751       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:17:31.666807       1 metrics.go:72] Registering metrics
	I1123 10:17:31.666907       1 controller.go:711] "Syncing nftables rules"
	I1123 10:17:41.271610       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 10:17:41.271655       1 main.go:301] handling current node
	I1123 10:17:51.272047       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 10:17:51.272121       1 main.go:301] handling current node
	I1123 10:18:01.271872       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 10:18:01.271928       1 main.go:301] handling current node
	I1123 10:18:11.274336       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 10:18:11.274387       1 main.go:301] handling current node
	I1123 10:18:21.279180       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 10:18:21.279211       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3ce42ea391320b5ee86e145a2f64c2015bb9f8236b5dfa38af9a25f2cb484824] <==
	I1123 10:17:30.222396       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 10:17:30.223939       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:17:30.224962       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 10:17:30.225147       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 10:17:30.225290       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 10:17:30.225513       1 aggregator.go:171] initial CRD sync complete...
	I1123 10:17:30.225631       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 10:17:30.225664       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:17:30.225707       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:17:30.228624       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 10:17:30.228686       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1123 10:17:30.232317       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 10:17:30.233796       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 10:17:30.258911       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 10:17:30.466190       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:17:30.579286       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:17:30.608615       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:17:30.632083       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:17:30.642786       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:17:30.694414       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.209.242"}
	I1123 10:17:30.709882       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.129.60"}
	I1123 10:17:31.123545       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:17:33.574937       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:17:34.084969       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:17:34.127161       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0632950c74da2eb4978b2f96c82351b0c7fc311f03cdaaff9f60fb24bdaa3804] <==
	I1123 10:17:33.557123       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 10:17:33.559994       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 10:17:33.572517       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 10:17:33.572553       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 10:17:33.572544       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 10:17:33.575394       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:17:33.578226       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 10:17:33.579466       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 10:17:33.580700       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 10:17:33.582941       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 10:17:33.584553       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 10:17:33.587015       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 10:17:33.589367       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 10:17:33.591714       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:17:33.592787       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 10:17:33.595050       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 10:17:33.596232       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 10:17:33.596249       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:17:33.596270       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:17:33.598517       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 10:17:33.601755       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 10:17:33.601817       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:17:33.601859       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 10:17:33.601936       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-412306"
	I1123 10:17:33.601988       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [a85287776ccce12df9499782bd76fd12f6a905bc4752aa767522a684fb205ca7] <==
	I1123 10:17:30.853039       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:17:30.926295       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:17:31.026854       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:17:31.026893       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1123 10:17:31.027039       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:17:31.049031       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:17:31.049187       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:17:31.055385       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:17:31.055784       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:17:31.055873       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:17:31.057417       1 config.go:200] "Starting service config controller"
	I1123 10:17:31.057448       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:17:31.057496       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:17:31.057504       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:17:31.057509       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:17:31.057511       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:17:31.057529       1 config.go:309] "Starting node config controller"
	I1123 10:17:31.057534       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:17:31.057540       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:17:31.158298       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:17:31.158316       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:17:31.158340       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e3ffbd81d631a2d4ada1879aabcbc74e4a0a1df338a0ca8e07cf4c3ff88f9430] <==
	I1123 10:17:27.590997       1 serving.go:386] Generated self-signed cert in-memory
	I1123 10:17:30.206421       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 10:17:30.206460       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:17:30.219727       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:17:30.219779       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 10:17:30.219820       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:17:30.221676       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:17:30.219863       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 10:17:30.219847       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 10:17:30.223982       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 10:17:30.224041       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 10:17:30.322017       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:17:30.324139       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 10:17:30.324150       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:17:30 embed-certs-412306 kubelet[728]: I1123 10:17:30.498077     728 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 23 10:17:34 embed-certs-412306 kubelet[728]: I1123 10:17:34.296369     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fbc63048-24c4-4cc1-8cf1-dcacbe4ba959-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-dw5cf\" (UID: \"fbc63048-24c4-4cc1-8cf1-dcacbe4ba959\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dw5cf"
	Nov 23 10:17:34 embed-certs-412306 kubelet[728]: I1123 10:17:34.296871     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/13215de9-0ff0-4c2a-8064-7d411ad69859-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4bhjp\" (UID: \"13215de9-0ff0-4c2a-8064-7d411ad69859\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp"
	Nov 23 10:17:34 embed-certs-412306 kubelet[728]: I1123 10:17:34.296936     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7bxx\" (UniqueName: \"kubernetes.io/projected/fbc63048-24c4-4cc1-8cf1-dcacbe4ba959-kube-api-access-k7bxx\") pod \"kubernetes-dashboard-855c9754f9-dw5cf\" (UID: \"fbc63048-24c4-4cc1-8cf1-dcacbe4ba959\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dw5cf"
	Nov 23 10:17:34 embed-certs-412306 kubelet[728]: I1123 10:17:34.296978     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6hz7\" (UniqueName: \"kubernetes.io/projected/13215de9-0ff0-4c2a-8064-7d411ad69859-kube-api-access-n6hz7\") pod \"dashboard-metrics-scraper-6ffb444bf9-4bhjp\" (UID: \"13215de9-0ff0-4c2a-8064-7d411ad69859\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp"
	Nov 23 10:17:37 embed-certs-412306 kubelet[728]: I1123 10:17:37.525698     728 scope.go:117] "RemoveContainer" containerID="e26e58ec6a48dcd2b12fdac095723967eb0d4f8d0a0bac4df44b6bc26963f14f"
	Nov 23 10:17:38 embed-certs-412306 kubelet[728]: I1123 10:17:38.530061     728 scope.go:117] "RemoveContainer" containerID="e26e58ec6a48dcd2b12fdac095723967eb0d4f8d0a0bac4df44b6bc26963f14f"
	Nov 23 10:17:38 embed-certs-412306 kubelet[728]: I1123 10:17:38.530202     728 scope.go:117] "RemoveContainer" containerID="e5bb39c4a88620dc6274844b1af0ef3f5f475f73ed096eefe30cecb7ff55fbd4"
	Nov 23 10:17:38 embed-certs-412306 kubelet[728]: E1123 10:17:38.530445     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4bhjp_kubernetes-dashboard(13215de9-0ff0-4c2a-8064-7d411ad69859)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp" podUID="13215de9-0ff0-4c2a-8064-7d411ad69859"
	Nov 23 10:17:39 embed-certs-412306 kubelet[728]: I1123 10:17:39.534807     728 scope.go:117] "RemoveContainer" containerID="e5bb39c4a88620dc6274844b1af0ef3f5f475f73ed096eefe30cecb7ff55fbd4"
	Nov 23 10:17:39 embed-certs-412306 kubelet[728]: E1123 10:17:39.535046     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4bhjp_kubernetes-dashboard(13215de9-0ff0-4c2a-8064-7d411ad69859)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp" podUID="13215de9-0ff0-4c2a-8064-7d411ad69859"
	Nov 23 10:17:41 embed-certs-412306 kubelet[728]: I1123 10:17:41.554697     728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dw5cf" podStartSLOduration=0.703843681 podStartE2EDuration="7.554664882s" podCreationTimestamp="2025-11-23 10:17:34 +0000 UTC" firstStartedPulling="2025-11-23 10:17:34.554034118 +0000 UTC m=+8.258087749" lastFinishedPulling="2025-11-23 10:17:41.404855329 +0000 UTC m=+15.108908950" observedRunningTime="2025-11-23 10:17:41.553635493 +0000 UTC m=+15.257689131" watchObservedRunningTime="2025-11-23 10:17:41.554664882 +0000 UTC m=+15.258718511"
	Nov 23 10:17:47 embed-certs-412306 kubelet[728]: I1123 10:17:47.214173     728 scope.go:117] "RemoveContainer" containerID="e5bb39c4a88620dc6274844b1af0ef3f5f475f73ed096eefe30cecb7ff55fbd4"
	Nov 23 10:17:47 embed-certs-412306 kubelet[728]: E1123 10:17:47.214375     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4bhjp_kubernetes-dashboard(13215de9-0ff0-4c2a-8064-7d411ad69859)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp" podUID="13215de9-0ff0-4c2a-8064-7d411ad69859"
	Nov 23 10:18:01 embed-certs-412306 kubelet[728]: I1123 10:18:01.443504     728 scope.go:117] "RemoveContainer" containerID="e5bb39c4a88620dc6274844b1af0ef3f5f475f73ed096eefe30cecb7ff55fbd4"
	Nov 23 10:18:01 embed-certs-412306 kubelet[728]: I1123 10:18:01.591134     728 scope.go:117] "RemoveContainer" containerID="c50f133d26d04101f2479db4f241a3a6ef37b6beb8a70dd8044463313b1b1ba7"
	Nov 23 10:18:01 embed-certs-412306 kubelet[728]: I1123 10:18:01.593339     728 scope.go:117] "RemoveContainer" containerID="e5bb39c4a88620dc6274844b1af0ef3f5f475f73ed096eefe30cecb7ff55fbd4"
	Nov 23 10:18:01 embed-certs-412306 kubelet[728]: I1123 10:18:01.593576     728 scope.go:117] "RemoveContainer" containerID="142d4e2b0120e34731be21c77b8c41aff72dea2ade2760d95ab388bd80fef96f"
	Nov 23 10:18:01 embed-certs-412306 kubelet[728]: E1123 10:18:01.593753     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4bhjp_kubernetes-dashboard(13215de9-0ff0-4c2a-8064-7d411ad69859)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp" podUID="13215de9-0ff0-4c2a-8064-7d411ad69859"
	Nov 23 10:18:07 embed-certs-412306 kubelet[728]: I1123 10:18:07.214388     728 scope.go:117] "RemoveContainer" containerID="142d4e2b0120e34731be21c77b8c41aff72dea2ade2760d95ab388bd80fef96f"
	Nov 23 10:18:07 embed-certs-412306 kubelet[728]: E1123 10:18:07.214632     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4bhjp_kubernetes-dashboard(13215de9-0ff0-4c2a-8064-7d411ad69859)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4bhjp" podUID="13215de9-0ff0-4c2a-8064-7d411ad69859"
	Nov 23 10:18:16 embed-certs-412306 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:18:16 embed-certs-412306 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:18:16 embed-certs-412306 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 10:18:16 embed-certs-412306 systemd[1]: kubelet.service: Consumed 1.625s CPU time.
	
	
	==> kubernetes-dashboard [a13d6171b7f17830237a3cf2ae96d3362f30e2cedebf638fd57cb088e78597c5] <==
	2025/11/23 10:17:41 Using namespace: kubernetes-dashboard
	2025/11/23 10:17:41 Using in-cluster config to connect to apiserver
	2025/11/23 10:17:41 Using secret token for csrf signing
	2025/11/23 10:17:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 10:17:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 10:17:41 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 10:17:41 Generating JWE encryption key
	2025/11/23 10:17:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 10:17:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 10:17:41 Initializing JWE encryption key from synchronized object
	2025/11/23 10:17:41 Creating in-cluster Sidecar client
	2025/11/23 10:17:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:17:41 Serving insecurely on HTTP port: 9090
	2025/11/23 10:18:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:17:41 Starting overwatch
	
	
	==> storage-provisioner [704bba87333e873742681d9c76cf92f3fe506464ae3f386988d14477495c41ff] <==
	I1123 10:18:01.643784       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:18:01.651354       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:18:01.651399       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:18:01.653810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:05.109178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:09.370166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:12.968663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:16.023041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:19.045617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:19.051671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:18:19.051832       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:18:19.052170       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6461001a-51cb-46e2-995d-2cc675b065ba", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-412306_fa8dd88d-b6bd-4ed8-ac0d-0fb40b81fadf became leader
	I1123 10:18:19.052246       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-412306_fa8dd88d-b6bd-4ed8-ac0d-0fb40b81fadf!
	W1123 10:18:19.054256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:19.057784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:18:19.152822       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-412306_fa8dd88d-b6bd-4ed8-ac0d-0fb40b81fadf!
	W1123 10:18:21.061069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:21.207295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:23.211496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:18:23.215651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c50f133d26d04101f2479db4f241a3a6ef37b6beb8a70dd8044463313b1b1ba7] <==
	I1123 10:17:30.816818       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 10:18:00.819130       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
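The second storage-provisioner instance captured above dies with "dial tcp 10.96.0.1:443: i/o timeout", i.e. the in-cluster apiserver Service VIP stayed unreachable from the provisioner container for the full ~30s client timeout after the restart. A minimal way to probe the same path by hand, sketched on the assumption that the embed-certs-412306 profile is still up (it is deleted later in this run):

	# confirm the ClusterIP the provisioner was dialing (10.96.0.1 is the kubernetes Service VIP)
	kubectl --context embed-certs-412306 get svc kubernetes -o wide
	# try the VIP from inside the node
	minikube -p embed-certs-412306 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version
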
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-412306 -n embed-certs-412306
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-412306 -n embed-certs-412306: exit status 2 (359.749641ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-412306 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (8.35s)
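The helper treats the exit status 2 from the status command above as possibly benign ("may be ok"). To see the per-component breakdown, and to retry the pause step with the same invocation the suite records in its Audit log further down, something like the following would work (sketch only; the profile is deleted at the end of this run):

	out/minikube-linux-amd64 status -p embed-certs-412306 --output json
	out/minikube-linux-amd64 pause -p embed-certs-412306 --alsologtostderr -v=1
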

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-956615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-956615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (250.513386ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:18:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-956615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
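The enable never reaches metrics-server: it fails in minikube's check for a paused cluster, which shells out to "sudo runc list -f json" inside the node and gets exit 1 because /run/runc does not exist. To poke at the same state by hand (a sketch, assuming newest-cni-956615 is still running; the crio config call mirrors the "ssh ... sudo crio config" entry in the Audit log below, and the grep keys are an assumption about CRI-O's TOML layout):

	# the exact probe the paused check runs
	minikube -p newest-cni-956615 ssh -- sudo runc list -f json
	# what the CRI-O runtime itself reports
	minikube -p newest-cni-956615 ssh -- sudo crictl ps -a
	# where CRI-O is configured to keep runc state
	minikube -p newest-cni-956615 ssh -- sudo crio config | grep -n 'runtime_root\|runtime_path'
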
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-956615
helpers_test.go:243: (dbg) docker inspect newest-cni-956615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f539d26299e024705c8ed4977ba95cc7d68e6aef83e923825c2eb03f8e10fec6",
	        "Created": "2025-11-23T10:18:21.747900359Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 386855,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:18:21.790275108Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/f539d26299e024705c8ed4977ba95cc7d68e6aef83e923825c2eb03f8e10fec6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f539d26299e024705c8ed4977ba95cc7d68e6aef83e923825c2eb03f8e10fec6/hostname",
	        "HostsPath": "/var/lib/docker/containers/f539d26299e024705c8ed4977ba95cc7d68e6aef83e923825c2eb03f8e10fec6/hosts",
	        "LogPath": "/var/lib/docker/containers/f539d26299e024705c8ed4977ba95cc7d68e6aef83e923825c2eb03f8e10fec6/f539d26299e024705c8ed4977ba95cc7d68e6aef83e923825c2eb03f8e10fec6-json.log",
	        "Name": "/newest-cni-956615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-956615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-956615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f539d26299e024705c8ed4977ba95cc7d68e6aef83e923825c2eb03f8e10fec6",
	                "LowerDir": "/var/lib/docker/overlay2/5e2770a52b215d78ec65c81478f7d140e2c3671758e4e1ba86ee1fa9b246e021-init/diff:/var/lib/docker/overlay2/fa24abb4c55f78a010c7e2a32f724b8d5e912441e40bb77877899b0e5f3a9c8d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e2770a52b215d78ec65c81478f7d140e2c3671758e4e1ba86ee1fa9b246e021/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e2770a52b215d78ec65c81478f7d140e2c3671758e4e1ba86ee1fa9b246e021/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e2770a52b215d78ec65c81478f7d140e2c3671758e4e1ba86ee1fa9b246e021/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-956615",
	                "Source": "/var/lib/docker/volumes/newest-cni-956615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-956615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-956615",
	                "name.minikube.sigs.k8s.io": "newest-cni-956615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1270fafee631a4185fefc45987be1077d7d71e3f5ca87b01af9d645682219f1b",
	            "SandboxKey": "/var/run/docker/netns/1270fafee631",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-956615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5c68f6166aad3fb7b971424217c915ea4f510b57832199566c6c4da05aa3fd0e",
	                    "EndpointID": "7714f550414428ad2cec8888306a541b636b7eefce83f2065a1deea204e4360d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "6e:ae:a7:f5:f5:d1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-956615",
	                        "f539d26299e0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
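From the NetworkSettings block above, the apiserver port 8443/tcp is published on 127.0.0.1:33128. If only that mapping is needed rather than the whole dump, it can be pulled straight out of Docker (sketch; the port numbers are specific to this run):

	docker port newest-cni-956615 8443
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-956615
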
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-956615 -n newest-cni-956615
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-956615 logs -n 25
E1123 10:18:42.089020   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/kindnet-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:42.095413   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/kindnet-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:42.106756   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/kindnet-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:42.128192   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/kindnet-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:42.169593   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/kindnet-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:42.224019   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/auto-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:42.251550   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/kindnet-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:42.413550   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/kindnet-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:42.735074   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/kindnet-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-791161 sudo crio config                                                                                                                                                                                                             │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ delete  │ -p bridge-791161                                                                                                                                                                                                                              │ bridge-791161                │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ delete  │ -p disable-driver-mounts-268907                                                                                                                                                                                                               │ disable-driver-mounts-268907 │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-541522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p default-k8s-diff-port-772252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-412306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ start   │ -p embed-certs-412306 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ old-k8s-version-990757 image list --format=json                                                                                                                                                                                               │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p old-k8s-version-990757 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-772252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-772252 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ delete  │ -p old-k8s-version-990757                                                                                                                                                                                                                     │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ no-preload-541522 image list --format=json                                                                                                                                                                                                    │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p no-preload-541522 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ delete  │ -p old-k8s-version-990757                                                                                                                                                                                                                     │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ embed-certs-412306 image list --format=json                                                                                                                                                                                                   │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p newest-cni-956615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p embed-certs-412306 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ delete  │ -p no-preload-541522                                                                                                                                                                                                                          │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ delete  │ -p no-preload-541522                                                                                                                                                                                                                          │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ delete  │ -p embed-certs-412306                                                                                                                                                                                                                         │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ delete  │ -p embed-certs-412306                                                                                                                                                                                                                         │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-772252 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p default-k8s-diff-port-772252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-956615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:18:29
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:18:29.026145  390057 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:18:29.026273  390057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:29.026283  390057 out.go:374] Setting ErrFile to fd 2...
	I1123 10:18:29.026290  390057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:29.026473  390057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:18:29.026941  390057 out.go:368] Setting JSON to false
	I1123 10:18:29.028163  390057 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10850,"bootTime":1763882259,"procs":437,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:18:29.028232  390057 start.go:143] virtualization: kvm guest
	I1123 10:18:29.030157  390057 out.go:179] * [default-k8s-diff-port-772252] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:18:29.031423  390057 notify.go:221] Checking for updates...
	I1123 10:18:29.031441  390057 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:18:29.032703  390057 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:18:29.033974  390057 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:18:29.035071  390057 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 10:18:29.036158  390057 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:18:29.037169  390057 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:18:29.038536  390057 config.go:182] Loaded profile config "default-k8s-diff-port-772252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:29.039065  390057 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:18:29.062109  390057 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 10:18:29.062257  390057 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:18:29.118204  390057 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-23 10:18:29.107596824 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:18:29.118362  390057 docker.go:319] overlay module found
	I1123 10:18:29.120121  390057 out.go:179] * Using the docker driver based on existing profile
	I1123 10:18:29.121075  390057 start.go:309] selected driver: docker
	I1123 10:18:29.121103  390057 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-772252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:18:29.121198  390057 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:18:29.121752  390057 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:18:29.179658  390057 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-23 10:18:29.169864562 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:18:29.180021  390057 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:18:29.180057  390057 cni.go:84] Creating CNI manager for ""
	I1123 10:18:29.180145  390057 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:18:29.180203  390057 start.go:353] cluster config:
	{Name:default-k8s-diff-port-772252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:18:29.182066  390057 out.go:179] * Starting "default-k8s-diff-port-772252" primary control-plane node in "default-k8s-diff-port-772252" cluster
	I1123 10:18:29.183169  390057 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:18:29.184288  390057 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:18:29.185267  390057 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:18:29.185297  390057 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 10:18:29.185309  390057 cache.go:65] Caching tarball of preloaded images
	I1123 10:18:29.185343  390057 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:18:29.185395  390057 preload.go:238] Found /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 10:18:29.185405  390057 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:18:29.185524  390057 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/config.json ...
	I1123 10:18:29.207240  390057 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:18:29.207264  390057 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:18:29.207280  390057 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:18:29.207316  390057 start.go:360] acquireMachinesLock for default-k8s-diff-port-772252: {Name:mkf6e16e36e4b276878485d819f412bbd45719c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:18:29.207383  390057 start.go:364] duration metric: took 44.015µs to acquireMachinesLock for "default-k8s-diff-port-772252"
	I1123 10:18:29.207406  390057 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:18:29.207415  390057 fix.go:54] fixHost starting: 
	I1123 10:18:29.207631  390057 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:18:29.225014  390057 fix.go:112] recreateIfNeeded on default-k8s-diff-port-772252: state=Stopped err=<nil>
	W1123 10:18:29.225053  390057 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 10:18:26.347045  384087 out.go:252]   - Generating certificates and keys ...
	I1123 10:18:26.347168  384087 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 10:18:26.347281  384087 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 10:18:26.691769  384087 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:18:26.814811  384087 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 10:18:26.892082  384087 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:18:27.364645  384087 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:18:27.553661  384087 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:18:27.553806  384087 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-956615] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 10:18:27.622050  384087 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:18:27.622252  384087 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-956615] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 10:18:27.662120  384087 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:18:27.823290  384087 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:18:28.048246  384087 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:18:28.048313  384087 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:18:28.179228  384087 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:18:28.617084  384087 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 10:18:28.940179  384087 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:18:29.077013  384087 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:18:29.769439  384087 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:18:29.770472  384087 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:18:29.776680  384087 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 10:18:29.778155  384087 out.go:252]   - Booting up control plane ...
	I1123 10:18:29.778292  384087 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:18:29.778398  384087 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:18:29.778851  384087 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:18:29.794607  384087 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:18:29.794765  384087 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 10:18:29.801677  384087 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 10:18:29.803134  384087 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:18:29.803196  384087 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:18:29.900160  384087 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 10:18:29.900299  384087 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 10:18:30.902120  384087 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002006216s
	I1123 10:18:30.906130  384087 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 10:18:30.906248  384087 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 10:18:30.906387  384087 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 10:18:30.906503  384087 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 10:18:32.187098  384087 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.280762067s
	I1123 10:18:32.284864  384087 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.378711702s
	I1123 10:18:33.907950  384087 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.001801564s
	I1123 10:18:33.920302  384087 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:18:33.930653  384087 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:18:33.940351  384087 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:18:33.940603  384087 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-956615 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:18:33.949027  384087 kubeadm.go:319] [bootstrap-token] Using token: kyc54w.vfu5kfzlpnk4qz8p
	I1123 10:18:29.226729  390057 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-772252" ...
	I1123 10:18:29.226788  390057 cli_runner.go:164] Run: docker start default-k8s-diff-port-772252
	I1123 10:18:29.502972  390057 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:18:29.522662  390057 kic.go:430] container "default-k8s-diff-port-772252" state is running.
	I1123 10:18:29.523081  390057 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772252
	I1123 10:18:29.541135  390057 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/config.json ...
	I1123 10:18:29.541342  390057 machine.go:94] provisionDockerMachine start ...
	I1123 10:18:29.541407  390057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:18:29.560235  390057 main.go:143] libmachine: Using SSH client type: native
	I1123 10:18:29.560599  390057 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1123 10:18:29.560620  390057 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:18:29.561290  390057 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33958->127.0.0.1:33130: read: connection reset by peer
	I1123 10:18:32.703754  390057 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-772252
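
The "connection reset by peer" at 10:18:29 followed by a clean command result at 10:18:32 is the usual pattern right after docker start: sshd inside the container is not accepting connections yet, so the client keeps retrying. A minimal sketch of such a wait loop as a plain TCP reachability probe (not minikube's libmachine SSH client); the address reuses host port 33130 from the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH polls a TCP port until it accepts connections or the deadline
    // passes. Editorial sketch of the retry pattern visible in the log.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("ssh port %s not reachable: %w", addr, err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSSH("127.0.0.1:33130", 30*time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("ssh port is accepting connections")
    }
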
	
	I1123 10:18:32.703792  390057 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-772252"
	I1123 10:18:32.703855  390057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:18:32.721954  390057 main.go:143] libmachine: Using SSH client type: native
	I1123 10:18:32.722223  390057 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1123 10:18:32.722244  390057 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-772252 && echo "default-k8s-diff-port-772252" | sudo tee /etc/hostname
	I1123 10:18:32.872855  390057 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-772252
	
	I1123 10:18:32.872929  390057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:18:32.890572  390057 main.go:143] libmachine: Using SSH client type: native
	I1123 10:18:32.890943  390057 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1123 10:18:32.890981  390057 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-772252' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-772252/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-772252' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:18:33.033733  390057 main.go:143] libmachine: SSH cmd err, output: <nil>: 
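
The shell above rewrites the 127.0.1.1 entry, or appends one, so the machine hostname resolves locally. A simplified Go sketch of the same edit applied to an in-memory hosts file; it omits the script's "already present" short-circuit:

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostsEntry replaces an existing 127.0.1.1 line with the machine
    // hostname, or appends one if none exists. Editorial sketch only.
    func ensureHostsEntry(hosts, name string) string {
        lines := strings.Split(strings.TrimRight(hosts, "\n"), "\n")
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name
                return strings.Join(lines, "\n") + "\n"
            }
        }
        return strings.Join(append(lines, "127.0.1.1 "+name), "\n") + "\n"
    }

    func main() {
        in := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
        fmt.Print(ensureHostsEntry(in, "default-k8s-diff-port-772252"))
    }
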
	I1123 10:18:33.033769  390057 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-64343/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-64343/.minikube}
	I1123 10:18:33.033824  390057 ubuntu.go:190] setting up certificates
	I1123 10:18:33.033845  390057 provision.go:84] configureAuth start
	I1123 10:18:33.033923  390057 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772252
	I1123 10:18:33.051276  390057 provision.go:143] copyHostCerts
	I1123 10:18:33.051335  390057 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem, removing ...
	I1123 10:18:33.051347  390057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem
	I1123 10:18:33.051412  390057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem (1082 bytes)
	I1123 10:18:33.051519  390057 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem, removing ...
	I1123 10:18:33.051528  390057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem
	I1123 10:18:33.051554  390057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem (1123 bytes)
	I1123 10:18:33.051624  390057 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem, removing ...
	I1123 10:18:33.051631  390057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem
	I1123 10:18:33.051654  390057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem (1675 bytes)
	I1123 10:18:33.051719  390057 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-772252 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-772252 localhost minikube]
	I1123 10:18:33.164120  390057 provision.go:177] copyRemoteCerts
	I1123 10:18:33.164208  390057 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:18:33.164252  390057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:18:33.182320  390057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:18:33.285995  390057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 10:18:33.306059  390057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 10:18:33.327007  390057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:18:33.346862  390057 provision.go:87] duration metric: took 312.997083ms to configureAuth
	I1123 10:18:33.346892  390057 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:18:33.347084  390057 config.go:182] Loaded profile config "default-k8s-diff-port-772252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:33.347214  390057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:18:33.368454  390057 main.go:143] libmachine: Using SSH client type: native
	I1123 10:18:33.368720  390057 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1123 10:18:33.368741  390057 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:18:33.746310  390057 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:18:33.746343  390057 machine.go:97] duration metric: took 4.204986254s to provisionDockerMachine
	I1123 10:18:33.746359  390057 start.go:293] postStartSetup for "default-k8s-diff-port-772252" (driver="docker")
	I1123 10:18:33.746374  390057 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:18:33.746442  390057 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:18:33.746494  390057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:18:33.766550  390057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:18:33.868846  390057 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:18:33.872453  390057 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:18:33.872478  390057 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:18:33.872489  390057 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/addons for local assets ...
	I1123 10:18:33.872530  390057 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/files for local assets ...
	I1123 10:18:33.872595  390057 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem -> 678702.pem in /etc/ssl/certs
	I1123 10:18:33.872682  390057 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:18:33.880620  390057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:18:33.898071  390057 start.go:296] duration metric: took 151.694263ms for postStartSetup
	I1123 10:18:33.898176  390057 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:18:33.898243  390057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:18:33.918029  390057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:18:34.021049  390057 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:18:34.025584  390057 fix.go:56] duration metric: took 4.818162031s for fixHost
	I1123 10:18:34.025609  390057 start.go:83] releasing machines lock for "default-k8s-diff-port-772252", held for 4.818214166s
	I1123 10:18:34.025676  390057 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772252
	I1123 10:18:34.044204  390057 ssh_runner.go:195] Run: cat /version.json
	I1123 10:18:34.044267  390057 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:18:34.044327  390057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:18:34.044339  390057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:18:34.062175  390057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:18:34.062695  390057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:18:34.213290  390057 ssh_runner.go:195] Run: systemctl --version
	I1123 10:18:34.220197  390057 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:18:34.257178  390057 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:18:34.262581  390057 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:18:34.262644  390057 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:18:34.272201  390057 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:18:34.272229  390057 start.go:496] detecting cgroup driver to use...
	I1123 10:18:34.272273  390057 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 10:18:34.272320  390057 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:18:34.290690  390057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:18:34.305809  390057 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:18:34.305865  390057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:18:34.323876  390057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:18:34.338381  390057 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:18:34.438075  390057 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:18:34.528191  390057 docker.go:234] disabling docker service ...
	I1123 10:18:34.528249  390057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:18:34.543933  390057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:18:34.561822  390057 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:18:34.648731  390057 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:18:34.741427  390057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:18:34.754444  390057 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:18:34.768623  390057 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:18:34.768672  390057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:34.777446  390057 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 10:18:34.777511  390057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:34.786235  390057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:34.795324  390057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:34.804811  390057 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:18:34.813255  390057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:34.821835  390057 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:34.830045  390057 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:34.838513  390057 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:18:34.845563  390057 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:18:34.852831  390057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:18:34.931362  390057 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:18:35.070457  390057 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:18:35.070518  390057 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:18:35.074696  390057 start.go:564] Will wait 60s for crictl version
	I1123 10:18:35.074753  390057 ssh_runner.go:195] Run: which crictl
	I1123 10:18:35.078480  390057 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:18:35.103966  390057 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:18:35.104064  390057 ssh_runner.go:195] Run: crio --version
	I1123 10:18:35.132326  390057 ssh_runner.go:195] Run: crio --version
	I1123 10:18:35.161783  390057 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
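
The sed invocations above point CRI-O at registry.k8s.io/pause:3.10.1 and switch it to the systemd cgroup manager with conmon_cgroup = "pod" before crio is restarted. A sketch of the same substitutions applied to an in-memory 02-crio.conf, assuming representative starting values:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Editorial sketch: the substitutions the log performs with sed on
    // /etc/crio/crio.conf.d/02-crio.conf, applied here to an in-memory string.
    func main() {
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "system.slice"
    `
        // point the runtime at the pause image used by this Kubernetes version
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        // use the systemd cgroup manager, with conmon in the pod cgroup
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "systemd"`)
        conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*$`).
            ReplaceAllString(conf, `conmon_cgroup = "pod"`)
        fmt.Print(conf)
    }
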
	I1123 10:18:33.950483  384087 out.go:252]   - Configuring RBAC rules ...
	I1123 10:18:33.950631  384087 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:18:33.953806  384087 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:18:33.959990  384087 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:18:33.962565  384087 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:18:33.965148  384087 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:18:33.967533  384087 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:18:34.314856  384087 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:18:34.727937  384087 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:18:35.315241  384087 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:18:35.316204  384087 kubeadm.go:319] 
	I1123 10:18:35.316313  384087 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:18:35.316332  384087 kubeadm.go:319] 
	I1123 10:18:35.316459  384087 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:18:35.316475  384087 kubeadm.go:319] 
	I1123 10:18:35.316505  384087 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:18:35.316604  384087 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:18:35.316687  384087 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:18:35.316698  384087 kubeadm.go:319] 
	I1123 10:18:35.316769  384087 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:18:35.316779  384087 kubeadm.go:319] 
	I1123 10:18:35.316869  384087 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:18:35.316890  384087 kubeadm.go:319] 
	I1123 10:18:35.316951  384087 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:18:35.317070  384087 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:18:35.317193  384087 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:18:35.317217  384087 kubeadm.go:319] 
	I1123 10:18:35.317322  384087 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:18:35.317411  384087 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:18:35.317421  384087 kubeadm.go:319] 
	I1123 10:18:35.317512  384087 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kyc54w.vfu5kfzlpnk4qz8p \
	I1123 10:18:35.317655  384087 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 \
	I1123 10:18:35.317690  384087 kubeadm.go:319] 	--control-plane 
	I1123 10:18:35.317696  384087 kubeadm.go:319] 
	I1123 10:18:35.317821  384087 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:18:35.317837  384087 kubeadm.go:319] 
	I1123 10:18:35.318003  384087 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kyc54w.vfu5kfzlpnk4qz8p \
	I1123 10:18:35.318172  384087 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c948c1195c5391c3f9ab3e5d33bde8c90cae803f5228ad4b30abfe9be3be121 
	I1123 10:18:35.320793  384087 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 10:18:35.320940  384087 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 10:18:35.320971  384087 cni.go:84] Creating CNI manager for ""
	I1123 10:18:35.320982  384087 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:18:35.323341  384087 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 10:18:35.324474  384087 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:18:35.328726  384087 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:18:35.328747  384087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:18:35.341667  384087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:18:35.574590  384087 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:18:35.574767  384087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:18:35.574909  384087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-956615 minikube.k8s.io/updated_at=2025_11_23T10_18_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=newest-cni-956615 minikube.k8s.io/primary=true
	I1123 10:18:35.669478  384087 ops.go:34] apiserver oom_adj: -16
	I1123 10:18:35.669484  384087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:18:35.162822  390057 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-772252 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:18:35.180271  390057 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1123 10:18:35.184342  390057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:18:35.194188  390057 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-772252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:18:35.194341  390057 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:18:35.194404  390057 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:18:35.225067  390057 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:18:35.225118  390057 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:18:35.225165  390057 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:18:35.252021  390057 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:18:35.252056  390057 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:18:35.252066  390057 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1123 10:18:35.252210  390057 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-772252 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
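
The kubelet drop-in above is rendered from the node configuration (container runtime, kubelet binary path, hostname override, node IP). A hypothetical Go text/template producing an override of that shape; the field names are illustrative, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // Editorial sketch: rendering a kubelet systemd override like the one shown
    // above from a few parameters. Not minikube's actual template.
    const unit = `[Unit]
    Wants={{.Runtime}}.service

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        t.Execute(os.Stdout, map[string]string{
            "Runtime":     "crio",
            "KubeletPath": "/var/lib/minikube/binaries/v1.34.1/kubelet",
            "NodeName":    "default-k8s-diff-port-772252",
            "NodeIP":      "192.168.103.2",
        })
    }
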
	I1123 10:18:35.252305  390057 ssh_runner.go:195] Run: crio config
	I1123 10:18:35.297543  390057 cni.go:84] Creating CNI manager for ""
	I1123 10:18:35.297565  390057 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:18:35.297581  390057 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:18:35.297601  390057 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-772252 NodeName:default-k8s-diff-port-772252 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:18:35.297718  390057 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-772252"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:18:35.297787  390057 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:18:35.306118  390057 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:18:35.306182  390057 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:18:35.313878  390057 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1123 10:18:35.327692  390057 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:18:35.340439  390057 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1123 10:18:35.354341  390057 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:18:35.358096  390057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:18:35.368783  390057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:18:35.460641  390057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:18:35.484648  390057 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252 for IP: 192.168.103.2
	I1123 10:18:35.484681  390057 certs.go:195] generating shared ca certs ...
	I1123 10:18:35.484702  390057 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:35.484860  390057 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 10:18:35.484953  390057 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 10:18:35.484969  390057 certs.go:257] generating profile certs ...
	I1123 10:18:35.485113  390057 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/client.key
	I1123 10:18:35.485204  390057 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.key.21e800d1
	I1123 10:18:35.485264  390057 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.key
	I1123 10:18:35.485388  390057 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem (1338 bytes)
	W1123 10:18:35.485440  390057 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870_empty.pem, impossibly tiny 0 bytes
	I1123 10:18:35.485456  390057 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 10:18:35.485497  390057 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:18:35.485532  390057 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:18:35.485568  390057 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 10:18:35.485627  390057 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:18:35.486384  390057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:18:35.506828  390057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:18:35.527319  390057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:18:35.548221  390057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 10:18:35.577546  390057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 10:18:35.604506  390057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:18:35.626798  390057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:18:35.648131  390057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/default-k8s-diff-port-772252/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:18:35.672415  390057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /usr/share/ca-certificates/678702.pem (1708 bytes)
	I1123 10:18:35.694033  390057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:18:35.713185  390057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem --> /usr/share/ca-certificates/67870.pem (1338 bytes)
	I1123 10:18:35.732496  390057 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:18:35.745755  390057 ssh_runner.go:195] Run: openssl version
	I1123 10:18:35.752369  390057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67870.pem && ln -fs /usr/share/ca-certificates/67870.pem /etc/ssl/certs/67870.pem"
	I1123 10:18:35.760459  390057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67870.pem
	I1123 10:18:35.764174  390057 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:28 /usr/share/ca-certificates/67870.pem
	I1123 10:18:35.764229  390057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67870.pem
	I1123 10:18:35.798256  390057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67870.pem /etc/ssl/certs/51391683.0"
	I1123 10:18:35.806535  390057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678702.pem && ln -fs /usr/share/ca-certificates/678702.pem /etc/ssl/certs/678702.pem"
	I1123 10:18:35.814764  390057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678702.pem
	I1123 10:18:35.818430  390057 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:28 /usr/share/ca-certificates/678702.pem
	I1123 10:18:35.818484  390057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678702.pem
	I1123 10:18:35.854165  390057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678702.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:18:35.862380  390057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:18:35.871384  390057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:18:35.875157  390057 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:18:35.875215  390057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:18:35.909051  390057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:18:35.916862  390057 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:18:35.920566  390057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:18:35.957402  390057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:18:35.992977  390057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:18:36.040410  390057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:18:36.083703  390057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:18:36.132776  390057 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
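
Each openssl x509 -noout -checkend 86400 run above asks whether a certificate expires within the next 24 hours. The same check expressed in Go with crypto/x509, as a standalone sketch; the path is one of the certificates named in the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // mirroring what `openssl x509 -noout -checkend` verifies.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
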
	I1123 10:18:36.183014  390057 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-772252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:18:36.183145  390057 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:18:36.183203  390057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:18:36.219635  390057 cri.go:89] found id: "ca0b7481c92ffd4b2bbdda49cb03c9b00d30df31c6dab4f9e33326e98ce4ab98"
	I1123 10:18:36.219663  390057 cri.go:89] found id: "7a142a8a31476f2dae05bfa267e6bed44ff2ff202efa2cb9c52dce5a34c9cb88"
	I1123 10:18:36.219678  390057 cri.go:89] found id: "a176b6c574c4db89ccebca8123845fafee7b14ca1a0baae180f32d747de3393a"
	I1123 10:18:36.219682  390057 cri.go:89] found id: "7db7bd227bf9ff6dab49de87c436200ac4ce2681564d93007f27e8429ac58b29"
	I1123 10:18:36.219686  390057 cri.go:89] found id: ""
	I1123 10:18:36.219736  390057 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:18:36.232994  390057 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:18:36Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:18:36.233072  390057 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:18:36.243120  390057 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:18:36.243152  390057 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:18:36.243202  390057 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:18:36.252535  390057 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:18:36.253045  390057 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-772252" does not appear in /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:18:36.253223  390057 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-64343/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-772252" cluster setting kubeconfig missing "default-k8s-diff-port-772252" context setting]
	I1123 10:18:36.253581  390057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:36.255250  390057 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:18:36.263360  390057 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1123 10:18:36.263395  390057 kubeadm.go:602] duration metric: took 20.237092ms to restartPrimaryControlPlane
	I1123 10:18:36.263406  390057 kubeadm.go:403] duration metric: took 80.407156ms to StartCluster
	I1123 10:18:36.263425  390057 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:36.263492  390057 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:18:36.264375  390057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:36.264626  390057 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:18:36.264697  390057 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:18:36.264810  390057 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-772252"
	I1123 10:18:36.264835  390057 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-772252"
	W1123 10:18:36.264844  390057 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:18:36.264838  390057 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-772252"
	I1123 10:18:36.264861  390057 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-772252"
	I1123 10:18:36.264873  390057 host.go:66] Checking if "default-k8s-diff-port-772252" exists ...
	W1123 10:18:36.264874  390057 addons.go:248] addon dashboard should already be in state true
	I1123 10:18:36.264879  390057 config.go:182] Loaded profile config "default-k8s-diff-port-772252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:36.264905  390057 host.go:66] Checking if "default-k8s-diff-port-772252" exists ...
	I1123 10:18:36.264872  390057 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-772252"
	I1123 10:18:36.264937  390057 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-772252"
	I1123 10:18:36.265257  390057 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:18:36.265413  390057 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:18:36.265447  390057 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:18:36.267649  390057 out.go:179] * Verifying Kubernetes components...
	I1123 10:18:36.268898  390057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:18:36.292536  390057 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:18:36.292591  390057 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 10:18:36.292909  390057 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-772252"
	W1123 10:18:36.292934  390057 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:18:36.292963  390057 host.go:66] Checking if "default-k8s-diff-port-772252" exists ...
	I1123 10:18:36.293462  390057 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:18:36.294053  390057 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:18:36.294072  390057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:18:36.294142  390057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:18:36.294970  390057 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:18:36.295976  390057 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:18:36.295997  390057 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:18:36.296061  390057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:18:36.327756  390057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:18:36.329032  390057 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:18:36.329054  390057 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:18:36.329137  390057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:18:36.332575  390057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:18:36.352880  390057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:18:36.429557  390057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:18:36.446074  390057 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-772252" to be "Ready" ...
	I1123 10:18:36.449517  390057 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:18:36.449537  390057 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:18:36.453191  390057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:18:36.464544  390057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:18:36.466908  390057 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:18:36.466932  390057 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:18:36.481768  390057 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:18:36.481791  390057 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:18:36.497263  390057 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:18:36.497396  390057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:18:36.512621  390057 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:18:36.512646  390057 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:18:36.528963  390057 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:18:36.528983  390057 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:18:36.543697  390057 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:18:36.543817  390057 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:18:36.557649  390057 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:18:36.557669  390057 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:18:36.570516  390057 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:18:36.570535  390057 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:18:36.583579  390057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:18:37.758907  390057 node_ready.go:49] node "default-k8s-diff-port-772252" is "Ready"
	I1123 10:18:37.758981  390057 node_ready.go:38] duration metric: took 1.312849686s for node "default-k8s-diff-port-772252" to be "Ready" ...
	I1123 10:18:37.759022  390057 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:18:37.759121  390057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:18:38.295681  390057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.842455973s)
	I1123 10:18:38.295784  390057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.831213366s)
	I1123 10:18:38.295938  390057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.712324567s)
	I1123 10:18:38.295966  390057 api_server.go:72] duration metric: took 2.031308899s to wait for apiserver process to appear ...
	I1123 10:18:38.295982  390057 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:18:38.296005  390057 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1123 10:18:38.297751  390057 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-772252 addons enable metrics-server
	
	I1123 10:18:38.300665  390057 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:18:38.300689  390057 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:18:38.303061  390057 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 10:18:38.304033  390057 addons.go:530] duration metric: took 2.039346598s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 10:18:38.796240  390057 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1123 10:18:38.800979  390057 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:18:38.801010  390057 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:18:36.170594  384087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:18:36.670267  384087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:18:37.170301  384087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:18:37.671242  384087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:18:38.169613  384087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:18:38.670298  384087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:18:39.170334  384087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:18:39.670196  384087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:18:40.170562  384087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:18:40.239160  384087 kubeadm.go:1114] duration metric: took 4.66443392s to wait for elevateKubeSystemPrivileges
	I1123 10:18:40.239197  384087 kubeadm.go:403] duration metric: took 14.140633439s to StartCluster
	I1123 10:18:40.239230  384087 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:40.239314  384087 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:18:40.240266  384087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:40.240503  384087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:18:40.240505  384087 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:18:40.240592  384087 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:18:40.240687  384087 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-956615"
	I1123 10:18:40.240713  384087 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-956615"
	I1123 10:18:40.240717  384087 config.go:182] Loaded profile config "newest-cni-956615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:40.240720  384087 addons.go:70] Setting default-storageclass=true in profile "newest-cni-956615"
	I1123 10:18:40.240751  384087 host.go:66] Checking if "newest-cni-956615" exists ...
	I1123 10:18:40.240763  384087 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-956615"
	I1123 10:18:40.241172  384087 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:40.241214  384087 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:40.242231  384087 out.go:179] * Verifying Kubernetes components...
	I1123 10:18:40.243642  384087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:18:40.270215  384087 addons.go:239] Setting addon default-storageclass=true in "newest-cni-956615"
	I1123 10:18:40.270273  384087 host.go:66] Checking if "newest-cni-956615" exists ...
	I1123 10:18:40.270712  384087 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:40.272589  384087 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:18:40.273742  384087 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:18:40.273762  384087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:18:40.273819  384087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:40.312120  384087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:40.316902  384087 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:18:40.316974  384087 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:18:40.317046  384087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:40.343520  384087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:40.367354  384087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:18:40.422341  384087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:18:40.437752  384087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:18:40.480651  384087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:18:40.622727  384087 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 10:18:40.624212  384087 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:18:40.624335  384087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:18:40.780054  384087 api_server.go:72] duration metric: took 539.502355ms to wait for apiserver process to appear ...
	I1123 10:18:40.780081  384087 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:18:40.780134  384087 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:18:40.787041  384087 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:18:40.787827  384087 api_server.go:141] control plane version: v1.34.1
	I1123 10:18:40.787853  384087 api_server.go:131] duration metric: took 7.764861ms to wait for apiserver health ...
	I1123 10:18:40.787865  384087 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:18:40.787869  384087 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 10:18:40.788978  384087 addons.go:530] duration metric: took 548.384296ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 10:18:40.790510  384087 system_pods.go:59] 8 kube-system pods found
	I1123 10:18:40.790548  384087 system_pods.go:61] "coredns-66bc5c9577-f5fbv" [a2a6f660-7d27-4ea8-b5b3-af124330c296] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 10:18:40.790574  384087 system_pods.go:61] "etcd-newest-cni-956615" [f8a39510-5fa3-42e6-a37e-6ceb4ff74876] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:18:40.790587  384087 system_pods.go:61] "kindnet-pfcv2" [5b3ef87c-1b75-4bb7-bafc-049f36caebc5] Running
	I1123 10:18:40.790597  384087 system_pods.go:61] "kube-apiserver-newest-cni-956615" [05c7eaaf-a379-4c0e-b15e-b4fd9b251e21] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:18:40.790608  384087 system_pods.go:61] "kube-controller-manager-newest-cni-956615" [9a577ee2-bcae-49ed-a341-0361d8b3e799] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:18:40.790617  384087 system_pods.go:61] "kube-proxy-ktlnh" [ca7b0e9b-f2f8-4b3f-92d0-691144b655a6] Running
	I1123 10:18:40.790624  384087 system_pods.go:61] "kube-scheduler-newest-cni-956615" [4eb905ef-9079-49bf-97cf-87d904882001] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:18:40.790631  384087 system_pods.go:61] "storage-provisioner" [3cdc36f3-a1eb-45d6-9e02-f2c0514c2888] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 10:18:40.790642  384087 system_pods.go:74] duration metric: took 2.766403ms to wait for pod list to return data ...
	I1123 10:18:40.790654  384087 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:18:40.792679  384087 default_sa.go:45] found service account: "default"
	I1123 10:18:40.792696  384087 default_sa.go:55] duration metric: took 2.035907ms for default service account to be created ...
	I1123 10:18:40.792706  384087 kubeadm.go:587] duration metric: took 552.165184ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:18:40.792718  384087 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:18:40.794809  384087 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:18:40.794834  384087 node_conditions.go:123] node cpu capacity is 8
	I1123 10:18:40.794849  384087 node_conditions.go:105] duration metric: took 2.12684ms to run NodePressure ...
	I1123 10:18:40.794861  384087 start.go:242] waiting for startup goroutines ...
	I1123 10:18:41.128655  384087 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-956615" context rescaled to 1 replicas
	I1123 10:18:41.128693  384087 start.go:247] waiting for cluster config update ...
	I1123 10:18:41.128710  384087 start.go:256] writing updated cluster config ...
	I1123 10:18:41.129068  384087 ssh_runner.go:195] Run: rm -f paused
	I1123 10:18:41.180133  384087 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:18:41.181857  384087 out.go:179] * Done! kubectl is now configured to use "newest-cni-956615" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.379443837Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.379650819Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=b220e246-2edd-46af-b9c3-b6b4938c1d91 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.383025293Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.383957213Z" level=info msg="Ran pod sandbox 7150663c63c1dd9ef6b0047c64302a8fe80b0a668a4d35489c9755ea3986e279 with infra container: kube-system/kube-proxy-ktlnh/POD" id=b220e246-2edd-46af-b9c3-b6b4938c1d91 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.38430174Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=18e693f9-df7b-421d-9618-1d3e681e5dd3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.38536541Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=cf221b73-ff29-4905-9c9a-bcdbfaa700ed name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.386551951Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.387471478Z" level=info msg="Ran pod sandbox e8bf756f7f878b1a6dc4e4b41d14516c957c3bbe4e9e78ad3243a885d9e6b96f with infra container: kube-system/kindnet-pfcv2/POD" id=18e693f9-df7b-421d-9618-1d3e681e5dd3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.387511593Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=770aa476-e3cd-4b77-a7e7-22cbcd137277 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.389530878Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=de1e4314-2b30-478d-b7ef-05498d3622e5 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.391200875Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=4c22eb1c-da56-44c8-8a67-c6e277fa716c name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.393027949Z" level=info msg="Creating container: kube-system/kube-proxy-ktlnh/kube-proxy" id=0a4a1611-c815-4e4d-b34d-ac45e3ba3c13 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.393372442Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.394660341Z" level=info msg="Creating container: kube-system/kindnet-pfcv2/kindnet-cni" id=c8cce426-e7b7-497d-aaf9-94cec51ac1bc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.394820145Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.402592048Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.403268189Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.403572956Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.404615301Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.463549623Z" level=info msg="Created container e45259f798bb66baa847c360e37f9cc15f0c9a3038516472fd61e544e16236bf: kube-system/kindnet-pfcv2/kindnet-cni" id=c8cce426-e7b7-497d-aaf9-94cec51ac1bc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.465203088Z" level=info msg="Starting container: e45259f798bb66baa847c360e37f9cc15f0c9a3038516472fd61e544e16236bf" id=0fd0a0fa-4001-49aa-9a01-c226fa9b0d66 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.465718671Z" level=info msg="Created container c4a22eef0e575f7f9988a5482591092c67a3f85300c063d3a3ebd5469bfd8a83: kube-system/kube-proxy-ktlnh/kube-proxy" id=0a4a1611-c815-4e4d-b34d-ac45e3ba3c13 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.466774987Z" level=info msg="Starting container: c4a22eef0e575f7f9988a5482591092c67a3f85300c063d3a3ebd5469bfd8a83" id=8699c582-90d1-4a0c-8356-93e2f5a21c3b name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.468248078Z" level=info msg="Started container" PID=1574 containerID=e45259f798bb66baa847c360e37f9cc15f0c9a3038516472fd61e544e16236bf description=kube-system/kindnet-pfcv2/kindnet-cni id=0fd0a0fa-4001-49aa-9a01-c226fa9b0d66 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e8bf756f7f878b1a6dc4e4b41d14516c957c3bbe4e9e78ad3243a885d9e6b96f
	Nov 23 10:18:40 newest-cni-956615 crio[779]: time="2025-11-23T10:18:40.471763559Z" level=info msg="Started container" PID=1573 containerID=c4a22eef0e575f7f9988a5482591092c67a3f85300c063d3a3ebd5469bfd8a83 description=kube-system/kube-proxy-ktlnh/kube-proxy id=8699c582-90d1-4a0c-8356-93e2f5a21c3b name=/runtime.v1.RuntimeService/StartContainer sandboxID=7150663c63c1dd9ef6b0047c64302a8fe80b0a668a4d35489c9755ea3986e279
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e45259f798bb6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   e8bf756f7f878       kindnet-pfcv2                               kube-system
	c4a22eef0e575       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   7150663c63c1d       kube-proxy-ktlnh                            kube-system
	45c769431cf46       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   11 seconds ago      Running             etcd                      0                   d0f113597aee3       etcd-newest-cni-956615                      kube-system
	6e869b846aa04       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   11 seconds ago      Running             kube-controller-manager   0                   7fd6aea3d29ce       kube-controller-manager-newest-cni-956615   kube-system
	e1a6fcc791720       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   11 seconds ago      Running             kube-apiserver            0                   416a4900f0699       kube-apiserver-newest-cni-956615            kube-system
	774745938c596       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   11 seconds ago      Running             kube-scheduler            0                   b1121c2bc0025       kube-scheduler-newest-cni-956615            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-956615
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-956615
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=newest-cni-956615
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_18_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:18:32 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-956615
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:18:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:18:34 +0000   Sun, 23 Nov 2025 10:18:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:18:34 +0000   Sun, 23 Nov 2025 10:18:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:18:34 +0000   Sun, 23 Nov 2025 10:18:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 23 Nov 2025 10:18:34 +0000   Sun, 23 Nov 2025 10:18:31 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-956615
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                a5d2206a-9559-4bfb-833b-e4a1b122ea26
	  Boot ID:                    37682299-5e60-467e-85b2-43c912a4056e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-956615                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-pfcv2                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-956615             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-956615    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-ktlnh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-956615             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 1s    kube-proxy       
	  Normal  Starting                 8s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s    kubelet          Node newest-cni-956615 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s    kubelet          Node newest-cni-956615 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s    kubelet          Node newest-cni-956615 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-956615 event: Registered Node newest-cni-956615 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[ +16.383752] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 09:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 10:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[Nov23 10:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 16 63 a6 3b 7c 08 06
	[  +0.000421] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e f8 56 88 48 d7 08 06
	[  +0.082350] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff be 6d 17 98 af e9 08 06
	[  +0.000334] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[ +24.687881] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 3c b3 56 e6 32 08 06
	[  +0.000364] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da b2 25 9e f0 5d 08 06
	[Nov23 10:16] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	[ +42.472302] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 bc be 6d 36 b3 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	
	
	==> etcd [45c769431cf4624c36ffe667f03c55eea32f7e6598a7b3bde5bc52217e85785c] <==
	{"level":"warn","ts":"2025-11-23T10:18:31.630631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.639318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.645863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.651880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.658734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.664985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.672050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.677987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.685125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.691990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.698332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.704984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.711889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.718565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.724526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.731811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.737635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.744758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.750832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.757457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.763598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.785444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.792805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.800948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:31.843355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48486","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:18:42 up  3:01,  0 user,  load average: 4.05, 4.84, 3.00
	Linux newest-cni-956615 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e45259f798bb66baa847c360e37f9cc15f0c9a3038516472fd61e544e16236bf] <==
	I1123 10:18:40.735212       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:18:40.735578       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 10:18:40.735798       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:18:40.735820       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:18:40.735847       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:18:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:18:40.939459       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:18:40.939563       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:18:40.939587       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:18:40.939828       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:18:41.332072       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:18:41.332134       1 metrics.go:72] Registering metrics
	I1123 10:18:41.332282       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [e1a6fcc7917206a6488299b5178d241e06303d87130bbddc6e376c1f8f14031b] <==
	I1123 10:18:32.335909       1 policy_source.go:240] refreshing policies
	E1123 10:18:32.371904       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1123 10:18:32.418817       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:18:32.422226       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:18:32.422313       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 10:18:32.427776       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:18:32.427990       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 10:18:32.516356       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:18:33.222002       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 10:18:33.225515       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 10:18:33.225535       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:18:33.674320       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:18:33.712802       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:18:33.825734       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 10:18:33.832260       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 10:18:33.833448       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:18:33.837912       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:18:34.253861       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:18:34.718876       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:18:34.727053       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 10:18:34.734202       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 10:18:39.956283       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:18:39.960580       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:18:40.054619       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 10:18:40.322547       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [6e869b846aa043d80432d2e2590fe7dbf03c98ed8586941d38bcaca8f449f272] <==
	I1123 10:18:39.252339       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:18:39.252355       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 10:18:39.252364       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 10:18:39.252361       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 10:18:39.252560       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 10:18:39.252631       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 10:18:39.252668       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 10:18:39.252679       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 10:18:39.252715       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:18:39.252959       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:18:39.253224       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:18:39.253314       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 10:18:39.253366       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 10:18:39.253544       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 10:18:39.255597       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 10:18:39.255614       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 10:18:39.255791       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:18:39.259555       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:18:39.264246       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 10:18:39.264298       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 10:18:39.264346       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 10:18:39.264357       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 10:18:39.264366       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 10:18:39.269709       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-956615" podCIDRs=["10.42.0.0/24"]
	I1123 10:18:39.271603       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [c4a22eef0e575f7f9988a5482591092c67a3f85300c063d3a3ebd5469bfd8a83] <==
	I1123 10:18:40.530227       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:18:40.606787       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:18:40.707124       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:18:40.707168       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 10:18:40.707282       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:18:40.726804       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:18:40.726882       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:18:40.732067       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:18:40.732521       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:18:40.732556       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:18:40.734245       1 config.go:200] "Starting service config controller"
	I1123 10:18:40.734348       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:18:40.734409       1 config.go:309] "Starting node config controller"
	I1123 10:18:40.734428       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:18:40.734351       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:18:40.734454       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:18:40.734617       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:18:40.734635       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:18:40.834989       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:18:40.835033       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:18:40.835041       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:18:40.835028       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [774745938c596ac90e8631acb2c3b2c26c17c19e8066214ab0238b14e36d9a11] <==
	E1123 10:18:32.281898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 10:18:32.281920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 10:18:32.282054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 10:18:32.282322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 10:18:32.282524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 10:18:32.282793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 10:18:32.283013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 10:18:32.283148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 10:18:32.283324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 10:18:32.283411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 10:18:32.283453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 10:18:32.283511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 10:18:32.283591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 10:18:33.095935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 10:18:33.226670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 10:18:33.283598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 10:18:33.315816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 10:18:33.350247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 10:18:33.389764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 10:18:33.394751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 10:18:33.405826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 10:18:33.500316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 10:18:33.514377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 10:18:33.514702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1123 10:18:36.578196       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:18:34 newest-cni-956615 kubelet[1311]: I1123 10:18:34.861168    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/11765587dc8d011bbda0d660d01232ff-kubeconfig\") pod \"kube-controller-manager-newest-cni-956615\" (UID: \"11765587dc8d011bbda0d660d01232ff\") " pod="kube-system/kube-controller-manager-newest-cni-956615"
	Nov 23 10:18:34 newest-cni-956615 kubelet[1311]: I1123 10:18:34.861208    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11765587dc8d011bbda0d660d01232ff-usr-local-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-956615\" (UID: \"11765587dc8d011bbda0d660d01232ff\") " pod="kube-system/kube-controller-manager-newest-cni-956615"
	Nov 23 10:18:34 newest-cni-956615 kubelet[1311]: I1123 10:18:34.861251    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/adc499505b868ad943eecac28571c191-etc-ca-certificates\") pod \"kube-apiserver-newest-cni-956615\" (UID: \"adc499505b868ad943eecac28571c191\") " pod="kube-system/kube-apiserver-newest-cni-956615"
	Nov 23 10:18:35 newest-cni-956615 kubelet[1311]: I1123 10:18:35.546686    1311 apiserver.go:52] "Watching apiserver"
	Nov 23 10:18:35 newest-cni-956615 kubelet[1311]: I1123 10:18:35.559040    1311 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 23 10:18:35 newest-cni-956615 kubelet[1311]: I1123 10:18:35.590330    1311 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-956615"
	Nov 23 10:18:35 newest-cni-956615 kubelet[1311]: I1123 10:18:35.590846    1311 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-956615"
	Nov 23 10:18:35 newest-cni-956615 kubelet[1311]: E1123 10:18:35.600380    1311 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-956615\" already exists" pod="kube-system/kube-apiserver-newest-cni-956615"
	Nov 23 10:18:35 newest-cni-956615 kubelet[1311]: E1123 10:18:35.601410    1311 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-956615\" already exists" pod="kube-system/etcd-newest-cni-956615"
	Nov 23 10:18:35 newest-cni-956615 kubelet[1311]: I1123 10:18:35.613034    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-956615" podStartSLOduration=1.61288789 podStartE2EDuration="1.61288789s" podCreationTimestamp="2025-11-23 10:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:18:35.612910035 +0000 UTC m=+1.127018115" watchObservedRunningTime="2025-11-23 10:18:35.61288789 +0000 UTC m=+1.126995926"
	Nov 23 10:18:35 newest-cni-956615 kubelet[1311]: I1123 10:18:35.626773    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-956615" podStartSLOduration=1.626752583 podStartE2EDuration="1.626752583s" podCreationTimestamp="2025-11-23 10:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:18:35.626574577 +0000 UTC m=+1.140682659" watchObservedRunningTime="2025-11-23 10:18:35.626752583 +0000 UTC m=+1.140860640"
	Nov 23 10:18:35 newest-cni-956615 kubelet[1311]: I1123 10:18:35.645972    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-956615" podStartSLOduration=2.645947651 podStartE2EDuration="2.645947651s" podCreationTimestamp="2025-11-23 10:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:18:35.636699182 +0000 UTC m=+1.150807241" watchObservedRunningTime="2025-11-23 10:18:35.645947651 +0000 UTC m=+1.160055708"
	Nov 23 10:18:35 newest-cni-956615 kubelet[1311]: I1123 10:18:35.659513    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-956615" podStartSLOduration=1.659492739 podStartE2EDuration="1.659492739s" podCreationTimestamp="2025-11-23 10:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:18:35.646179972 +0000 UTC m=+1.160288031" watchObservedRunningTime="2025-11-23 10:18:35.659492739 +0000 UTC m=+1.173600800"
	Nov 23 10:18:39 newest-cni-956615 kubelet[1311]: I1123 10:18:39.304985    1311 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 23 10:18:39 newest-cni-956615 kubelet[1311]: I1123 10:18:39.305854    1311 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 23 10:18:40 newest-cni-956615 kubelet[1311]: I1123 10:18:40.106642    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psc7t\" (UniqueName: \"kubernetes.io/projected/ca7b0e9b-f2f8-4b3f-92d0-691144b655a6-kube-api-access-psc7t\") pod \"kube-proxy-ktlnh\" (UID: \"ca7b0e9b-f2f8-4b3f-92d0-691144b655a6\") " pod="kube-system/kube-proxy-ktlnh"
	Nov 23 10:18:40 newest-cni-956615 kubelet[1311]: I1123 10:18:40.106694    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ca7b0e9b-f2f8-4b3f-92d0-691144b655a6-kube-proxy\") pod \"kube-proxy-ktlnh\" (UID: \"ca7b0e9b-f2f8-4b3f-92d0-691144b655a6\") " pod="kube-system/kube-proxy-ktlnh"
	Nov 23 10:18:40 newest-cni-956615 kubelet[1311]: I1123 10:18:40.106721    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca7b0e9b-f2f8-4b3f-92d0-691144b655a6-xtables-lock\") pod \"kube-proxy-ktlnh\" (UID: \"ca7b0e9b-f2f8-4b3f-92d0-691144b655a6\") " pod="kube-system/kube-proxy-ktlnh"
	Nov 23 10:18:40 newest-cni-956615 kubelet[1311]: I1123 10:18:40.106778    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b3ef87c-1b75-4bb7-bafc-049f36caebc5-lib-modules\") pod \"kindnet-pfcv2\" (UID: \"5b3ef87c-1b75-4bb7-bafc-049f36caebc5\") " pod="kube-system/kindnet-pfcv2"
	Nov 23 10:18:40 newest-cni-956615 kubelet[1311]: I1123 10:18:40.106817    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca7b0e9b-f2f8-4b3f-92d0-691144b655a6-lib-modules\") pod \"kube-proxy-ktlnh\" (UID: \"ca7b0e9b-f2f8-4b3f-92d0-691144b655a6\") " pod="kube-system/kube-proxy-ktlnh"
	Nov 23 10:18:40 newest-cni-956615 kubelet[1311]: I1123 10:18:40.106881    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5b3ef87c-1b75-4bb7-bafc-049f36caebc5-cni-cfg\") pod \"kindnet-pfcv2\" (UID: \"5b3ef87c-1b75-4bb7-bafc-049f36caebc5\") " pod="kube-system/kindnet-pfcv2"
	Nov 23 10:18:40 newest-cni-956615 kubelet[1311]: I1123 10:18:40.106933    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b3ef87c-1b75-4bb7-bafc-049f36caebc5-xtables-lock\") pod \"kindnet-pfcv2\" (UID: \"5b3ef87c-1b75-4bb7-bafc-049f36caebc5\") " pod="kube-system/kindnet-pfcv2"
	Nov 23 10:18:40 newest-cni-956615 kubelet[1311]: I1123 10:18:40.106960    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2t4j\" (UniqueName: \"kubernetes.io/projected/5b3ef87c-1b75-4bb7-bafc-049f36caebc5-kube-api-access-s2t4j\") pod \"kindnet-pfcv2\" (UID: \"5b3ef87c-1b75-4bb7-bafc-049f36caebc5\") " pod="kube-system/kindnet-pfcv2"
	Nov 23 10:18:40 newest-cni-956615 kubelet[1311]: I1123 10:18:40.614163    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-pfcv2" podStartSLOduration=0.614139616 podStartE2EDuration="614.139616ms" podCreationTimestamp="2025-11-23 10:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:18:40.614000658 +0000 UTC m=+6.128108717" watchObservedRunningTime="2025-11-23 10:18:40.614139616 +0000 UTC m=+6.128247672"
	Nov 23 10:18:40 newest-cni-956615 kubelet[1311]: I1123 10:18:40.894040    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ktlnh" podStartSLOduration=0.894016873 podStartE2EDuration="894.016873ms" podCreationTimestamp="2025-11-23 10:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:18:40.626686046 +0000 UTC m=+6.140794103" watchObservedRunningTime="2025-11-23 10:18:40.894016873 +0000 UTC m=+6.408124951"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-956615 -n newest-cni-956615
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-956615 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-f5fbv storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-956615 describe pod coredns-66bc5c9577-f5fbv storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-956615 describe pod coredns-66bc5c9577-f5fbv storage-provisioner: exit status 1 (58.342985ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-f5fbv" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-956615 describe pod coredns-66bc5c9577-f5fbv storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.99s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-956615 --alsologtostderr -v=1
E1123 10:19:02.585246   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/kindnet-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-956615 --alsologtostderr -v=1: exit status 80 (2.091397524s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-956615 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:19:02.321014  396302 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:19:02.321134  396302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:19:02.321143  396302 out.go:374] Setting ErrFile to fd 2...
	I1123 10:19:02.321147  396302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:19:02.321354  396302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:19:02.321640  396302 out.go:368] Setting JSON to false
	I1123 10:19:02.321670  396302 mustload.go:66] Loading cluster: newest-cni-956615
	I1123 10:19:02.322250  396302 config.go:182] Loaded profile config "newest-cni-956615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:19:02.322825  396302 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:19:02.341774  396302 host.go:66] Checking if "newest-cni-956615" exists ...
	I1123 10:19:02.342138  396302 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:19:02.405182  396302 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-23 10:19:02.394805523 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:19:02.405855  396302 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-956615 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 10:19:02.407942  396302 out.go:179] * Pausing node newest-cni-956615 ... 
	I1123 10:19:02.408968  396302 host.go:66] Checking if "newest-cni-956615" exists ...
	I1123 10:19:02.409304  396302 ssh_runner.go:195] Run: systemctl --version
	I1123 10:19:02.409367  396302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:19:02.428437  396302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:19:02.532285  396302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:19:02.545837  396302 pause.go:52] kubelet running: true
	I1123 10:19:02.545927  396302 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:19:02.684240  396302 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:19:02.684376  396302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:19:02.756350  396302 cri.go:89] found id: "b05323c6c0ab1938eb1588a7d2d3feb32f80596ed02fed4cce85977e5a3e22b2"
	I1123 10:19:02.756376  396302 cri.go:89] found id: "78484bdafb835b4a204df4db8a2d43436469113977af7e007b407536a0297189"
	I1123 10:19:02.756380  396302 cri.go:89] found id: "568d4e2e13f794ad02a27313df00fc828eacd24d6ea3ba4e30c0855507078458"
	I1123 10:19:02.756383  396302 cri.go:89] found id: "cc3d50e3b18ae83441894d5866b2ff39bc525a005f871ba93a8d151eef685e8f"
	I1123 10:19:02.756386  396302 cri.go:89] found id: "ab7965c57730d7f61bd3cc6d5b19e95f55562ca947a390e4616eeb716906b8a0"
	I1123 10:19:02.756390  396302 cri.go:89] found id: "3e6bea1c7000431f1f92160966ebdcb4353c6a869289c185164951c1370b9403"
	I1123 10:19:02.756393  396302 cri.go:89] found id: ""
	I1123 10:19:02.756431  396302 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:19:02.768997  396302 retry.go:31] will retry after 242.364485ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:19:02Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:19:03.012495  396302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:19:03.028163  396302 pause.go:52] kubelet running: false
	I1123 10:19:03.028229  396302 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:19:03.151324  396302 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:19:03.151451  396302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:19:03.223510  396302 cri.go:89] found id: "b05323c6c0ab1938eb1588a7d2d3feb32f80596ed02fed4cce85977e5a3e22b2"
	I1123 10:19:03.223540  396302 cri.go:89] found id: "78484bdafb835b4a204df4db8a2d43436469113977af7e007b407536a0297189"
	I1123 10:19:03.223546  396302 cri.go:89] found id: "568d4e2e13f794ad02a27313df00fc828eacd24d6ea3ba4e30c0855507078458"
	I1123 10:19:03.223550  396302 cri.go:89] found id: "cc3d50e3b18ae83441894d5866b2ff39bc525a005f871ba93a8d151eef685e8f"
	I1123 10:19:03.223553  396302 cri.go:89] found id: "ab7965c57730d7f61bd3cc6d5b19e95f55562ca947a390e4616eeb716906b8a0"
	I1123 10:19:03.223557  396302 cri.go:89] found id: "3e6bea1c7000431f1f92160966ebdcb4353c6a869289c185164951c1370b9403"
	I1123 10:19:03.223570  396302 cri.go:89] found id: ""
	I1123 10:19:03.223612  396302 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:19:03.236136  396302 retry.go:31] will retry after 257.922962ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:19:03Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:19:03.494689  396302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:19:03.508022  396302 pause.go:52] kubelet running: false
	I1123 10:19:03.508085  396302 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:19:03.631744  396302 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:19:03.631832  396302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:19:03.707258  396302 cri.go:89] found id: "b05323c6c0ab1938eb1588a7d2d3feb32f80596ed02fed4cce85977e5a3e22b2"
	I1123 10:19:03.707285  396302 cri.go:89] found id: "78484bdafb835b4a204df4db8a2d43436469113977af7e007b407536a0297189"
	I1123 10:19:03.707290  396302 cri.go:89] found id: "568d4e2e13f794ad02a27313df00fc828eacd24d6ea3ba4e30c0855507078458"
	I1123 10:19:03.707296  396302 cri.go:89] found id: "cc3d50e3b18ae83441894d5866b2ff39bc525a005f871ba93a8d151eef685e8f"
	I1123 10:19:03.707299  396302 cri.go:89] found id: "ab7965c57730d7f61bd3cc6d5b19e95f55562ca947a390e4616eeb716906b8a0"
	I1123 10:19:03.707304  396302 cri.go:89] found id: "3e6bea1c7000431f1f92160966ebdcb4353c6a869289c185164951c1370b9403"
	I1123 10:19:03.707308  396302 cri.go:89] found id: ""
	I1123 10:19:03.707367  396302 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:19:03.722864  396302 retry.go:31] will retry after 403.824045ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:19:03Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:19:04.127531  396302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:19:04.141055  396302 pause.go:52] kubelet running: false
	I1123 10:19:04.141159  396302 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:19:04.254500  396302 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:19:04.254577  396302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:19:04.322167  396302 cri.go:89] found id: "b05323c6c0ab1938eb1588a7d2d3feb32f80596ed02fed4cce85977e5a3e22b2"
	I1123 10:19:04.322195  396302 cri.go:89] found id: "78484bdafb835b4a204df4db8a2d43436469113977af7e007b407536a0297189"
	I1123 10:19:04.322199  396302 cri.go:89] found id: "568d4e2e13f794ad02a27313df00fc828eacd24d6ea3ba4e30c0855507078458"
	I1123 10:19:04.322203  396302 cri.go:89] found id: "cc3d50e3b18ae83441894d5866b2ff39bc525a005f871ba93a8d151eef685e8f"
	I1123 10:19:04.322206  396302 cri.go:89] found id: "ab7965c57730d7f61bd3cc6d5b19e95f55562ca947a390e4616eeb716906b8a0"
	I1123 10:19:04.322210  396302 cri.go:89] found id: "3e6bea1c7000431f1f92160966ebdcb4353c6a869289c185164951c1370b9403"
	I1123 10:19:04.322213  396302 cri.go:89] found id: ""
	I1123 10:19:04.322252  396302 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:19:04.336632  396302 out.go:203] 
	W1123 10:19:04.337752  396302 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:19:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:19:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:19:04.337771  396302 out.go:285] * 
	* 
	W1123 10:19:04.343594  396302 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:19:04.345218  396302 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-956615 --alsologtostderr -v=1 failed: exit status 80
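For triage context: the stderr captured above shows the pause path repeatedly running `sudo runc list -f json` on the node and finally exiting with GUEST_PAUSE after the command keeps failing with `open /run/runc: no such file or directory` (runc's default state root). The block below is a minimal Go sketch of that check only, not minikube's actual implementation; it assumes it is executed directly on the node (for example inside `minikube ssh`) rather than through minikube's SSH runner, and the JSON field names in the struct are assumptions about runc's output keys.

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// runcContainer holds the subset of fields this sketch reads from
	// `runc list -f json`; the json tags are assumed key names.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}
	
	// listRuncContainers mirrors the failing step in the log: it shells out to
	// `sudo runc list -f json` and decodes the result. On this node the command
	// itself fails because /run/runc does not exist.
	func listRuncContainers() ([]runcContainer, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("list running: runc: %w", err)
		}
		var containers []runcContainer
		// runc prints "null" when there are no containers; Unmarshal then
		// simply leaves the slice nil.
		if err := json.Unmarshal(out, &containers); err != nil {
			return nil, err
		}
		return containers, nil
	}
	
	func main() {
		containers, err := listRuncContainers()
		if err != nil {
			fmt.Println("pause precheck failed:", err)
			return
		}
		fmt.Printf("runc reports %d container(s)\n", len(containers))
	}

Running a sketch like this on the node makes it easy to distinguish the failure mode seen here (the runc state directory missing under the crio runtime) from the benign case of an empty container list.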
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-956615
helpers_test.go:243: (dbg) docker inspect newest-cni-956615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f539d26299e024705c8ed4977ba95cc7d68e6aef83e923825c2eb03f8e10fec6",
	        "Created": "2025-11-23T10:18:21.747900359Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 394520,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:18:52.152388921Z",
	            "FinishedAt": "2025-11-23T10:18:51.235949194Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/f539d26299e024705c8ed4977ba95cc7d68e6aef83e923825c2eb03f8e10fec6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f539d26299e024705c8ed4977ba95cc7d68e6aef83e923825c2eb03f8e10fec6/hostname",
	        "HostsPath": "/var/lib/docker/containers/f539d26299e024705c8ed4977ba95cc7d68e6aef83e923825c2eb03f8e10fec6/hosts",
	        "LogPath": "/var/lib/docker/containers/f539d26299e024705c8ed4977ba95cc7d68e6aef83e923825c2eb03f8e10fec6/f539d26299e024705c8ed4977ba95cc7d68e6aef83e923825c2eb03f8e10fec6-json.log",
	        "Name": "/newest-cni-956615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-956615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-956615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f539d26299e024705c8ed4977ba95cc7d68e6aef83e923825c2eb03f8e10fec6",
	                "LowerDir": "/var/lib/docker/overlay2/5e2770a52b215d78ec65c81478f7d140e2c3671758e4e1ba86ee1fa9b246e021-init/diff:/var/lib/docker/overlay2/fa24abb4c55f78a010c7e2a32f724b8d5e912441e40bb77877899b0e5f3a9c8d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e2770a52b215d78ec65c81478f7d140e2c3671758e4e1ba86ee1fa9b246e021/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e2770a52b215d78ec65c81478f7d140e2c3671758e4e1ba86ee1fa9b246e021/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e2770a52b215d78ec65c81478f7d140e2c3671758e4e1ba86ee1fa9b246e021/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-956615",
	                "Source": "/var/lib/docker/volumes/newest-cni-956615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-956615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-956615",
	                "name.minikube.sigs.k8s.io": "newest-cni-956615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "50448023e57c4db7dacb14b8e5d10116d0b7bba8c963cb16de94b70fdc37f632",
	            "SandboxKey": "/var/run/docker/netns/50448023e57c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-956615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5c68f6166aad3fb7b971424217c915ea4f510b57832199566c6c4da05aa3fd0e",
	                    "EndpointID": "7316d759afe56c45f4ba87942a8121882ebf28eb3ce422586dc31634f5e39c76",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "42:13:82:53:71:48",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-956615",
	                        "f539d26299e0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-956615 -n newest-cni-956615
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-956615 -n newest-cni-956615: exit status 2 (348.584385ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-956615 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p embed-certs-412306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ start   │ -p embed-certs-412306 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ old-k8s-version-990757 image list --format=json                                                                                                                                                                                               │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p old-k8s-version-990757 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-772252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-772252 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ delete  │ -p old-k8s-version-990757                                                                                                                                                                                                                     │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ no-preload-541522 image list --format=json                                                                                                                                                                                                    │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p no-preload-541522 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ delete  │ -p old-k8s-version-990757                                                                                                                                                                                                                     │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ embed-certs-412306 image list --format=json                                                                                                                                                                                                   │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p newest-cni-956615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p embed-certs-412306 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ delete  │ -p no-preload-541522                                                                                                                                                                                                                          │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ delete  │ -p no-preload-541522                                                                                                                                                                                                                          │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ delete  │ -p embed-certs-412306                                                                                                                                                                                                                         │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ delete  │ -p embed-certs-412306                                                                                                                                                                                                                         │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-772252 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p default-k8s-diff-port-772252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-956615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ stop    │ -p newest-cni-956615 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ addons  │ enable dashboard -p newest-cni-956615 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p newest-cni-956615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:19 UTC │
	│ image   │ newest-cni-956615 image list --format=json                                                                                                                                                                                                    │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │ 23 Nov 25 10:19 UTC │
	│ pause   │ -p newest-cni-956615 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:18:51
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:18:51.922533  394315 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:18:51.922773  394315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:51.922782  394315 out.go:374] Setting ErrFile to fd 2...
	I1123 10:18:51.922786  394315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:51.922982  394315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:18:51.923464  394315 out.go:368] Setting JSON to false
	I1123 10:18:51.924704  394315 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10873,"bootTime":1763882259,"procs":448,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:18:51.924759  394315 start.go:143] virtualization: kvm guest
	I1123 10:18:51.926884  394315 out.go:179] * [newest-cni-956615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:18:51.928337  394315 notify.go:221] Checking for updates...
	I1123 10:18:51.928373  394315 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:18:51.929751  394315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:18:51.931020  394315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:18:51.932349  394315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 10:18:51.933744  394315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:18:51.935099  394315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:18:51.936795  394315 config.go:182] Loaded profile config "newest-cni-956615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:51.937407  394315 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:18:51.961344  394315 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 10:18:51.961523  394315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:18:52.019286  394315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-23 10:18:52.009047301 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:18:52.019399  394315 docker.go:319] overlay module found
	I1123 10:18:52.021270  394315 out.go:179] * Using the docker driver based on existing profile
	I1123 10:18:52.022550  394315 start.go:309] selected driver: docker
	I1123 10:18:52.022565  394315 start.go:927] validating driver "docker" against &{Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:18:52.022649  394315 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:18:52.023207  394315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:18:52.080543  394315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-23 10:18:52.070324364 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:18:52.080908  394315 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:18:52.080949  394315 cni.go:84] Creating CNI manager for ""
	I1123 10:18:52.081035  394315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:18:52.081106  394315 start.go:353] cluster config:
	{Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:18:52.083159  394315 out.go:179] * Starting "newest-cni-956615" primary control-plane node in "newest-cni-956615" cluster
	I1123 10:18:52.084255  394315 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:18:52.085479  394315 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:18:52.086568  394315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:18:52.086596  394315 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 10:18:52.086606  394315 cache.go:65] Caching tarball of preloaded images
	I1123 10:18:52.086653  394315 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:18:52.086679  394315 preload.go:238] Found /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 10:18:52.086690  394315 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:18:52.086776  394315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/config.json ...
	I1123 10:18:52.108195  394315 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:18:52.108214  394315 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:18:52.108225  394315 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:18:52.108262  394315 start.go:360] acquireMachinesLock for newest-cni-956615: {Name:mk5c1d30234ac54be25b363f4d474b6dfbb1cb30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:18:52.108312  394315 start.go:364] duration metric: took 32.687µs to acquireMachinesLock for "newest-cni-956615"
	I1123 10:18:52.108328  394315 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:18:52.108334  394315 fix.go:54] fixHost starting: 
	I1123 10:18:52.108536  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:52.125249  394315 fix.go:112] recreateIfNeeded on newest-cni-956615: state=Stopped err=<nil>
	W1123 10:18:52.125297  394315 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 10:18:50.342961  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	W1123 10:18:52.842822  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	I1123 10:18:52.127162  394315 out.go:252] * Restarting existing docker container for "newest-cni-956615" ...
	I1123 10:18:52.127226  394315 cli_runner.go:164] Run: docker start newest-cni-956615
	I1123 10:18:52.396853  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:52.415351  394315 kic.go:430] container "newest-cni-956615" state is running.
	I1123 10:18:52.415793  394315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956615
	I1123 10:18:52.434420  394315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/config.json ...
	I1123 10:18:52.434630  394315 machine.go:94] provisionDockerMachine start ...
	I1123 10:18:52.434722  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:52.453553  394315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:18:52.453858  394315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1123 10:18:52.453876  394315 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:18:52.454582  394315 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46492->127.0.0.1:33135: read: connection reset by peer
	I1123 10:18:55.599296  394315 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956615
	
	I1123 10:18:55.599336  394315 ubuntu.go:182] provisioning hostname "newest-cni-956615"
	I1123 10:18:55.599394  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:55.618738  394315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:18:55.618993  394315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1123 10:18:55.619012  394315 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-956615 && echo "newest-cni-956615" | sudo tee /etc/hostname
	I1123 10:18:55.770698  394315 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956615
	
	I1123 10:18:55.770811  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:55.788813  394315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:18:55.789027  394315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1123 10:18:55.789043  394315 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-956615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-956615/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-956615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:18:55.932742  394315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:18:55.932777  394315 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-64343/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-64343/.minikube}
	I1123 10:18:55.932804  394315 ubuntu.go:190] setting up certificates
	I1123 10:18:55.932828  394315 provision.go:84] configureAuth start
	I1123 10:18:55.932895  394315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956615
	I1123 10:18:55.950646  394315 provision.go:143] copyHostCerts
	I1123 10:18:55.950720  394315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem, removing ...
	I1123 10:18:55.950739  394315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem
	I1123 10:18:55.950807  394315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem (1082 bytes)
	I1123 10:18:55.950927  394315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem, removing ...
	I1123 10:18:55.950935  394315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem
	I1123 10:18:55.950963  394315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem (1123 bytes)
	I1123 10:18:55.951043  394315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem, removing ...
	I1123 10:18:55.951050  394315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem
	I1123 10:18:55.951084  394315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem (1675 bytes)
	I1123 10:18:55.951181  394315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem org=jenkins.newest-cni-956615 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-956615]
	I1123 10:18:55.985638  394315 provision.go:177] copyRemoteCerts
	I1123 10:18:55.985691  394315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:18:55.985729  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.003060  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:56.105036  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 10:18:56.122557  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:18:56.139483  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 10:18:56.157358  394315 provision.go:87] duration metric: took 224.510848ms to configureAuth
	I1123 10:18:56.157392  394315 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:18:56.157621  394315 config.go:182] Loaded profile config "newest-cni-956615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:56.157753  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.175573  394315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:18:56.175795  394315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1123 10:18:56.175812  394315 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:18:56.475612  394315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:18:56.475643  394315 machine.go:97] duration metric: took 4.040999325s to provisionDockerMachine
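
The provisioning step above writes the service CIDR into /etc/sysconfig/crio.minikube as an insecure-registry option and restarts CRI-O. A minimal spot-check of the result; the file path and expected line come from the SSH command above, while the docker exec invocation is only an assumed way of reaching the node, not part of the test run:

	docker exec newest-cni-956615 cat /etc/sysconfig/crio.minikube
	# expected, per the provisioning command above:
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
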
	I1123 10:18:56.475663  394315 start.go:293] postStartSetup for "newest-cni-956615" (driver="docker")
	I1123 10:18:56.475674  394315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:18:56.475746  394315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:18:56.475803  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.493158  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:56.593217  394315 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:18:56.596801  394315 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:18:56.596832  394315 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:18:56.596844  394315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/addons for local assets ...
	I1123 10:18:56.596895  394315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/files for local assets ...
	I1123 10:18:56.596983  394315 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem -> 678702.pem in /etc/ssl/certs
	I1123 10:18:56.597076  394315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:18:56.604613  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:18:56.621377  394315 start.go:296] duration metric: took 145.698257ms for postStartSetup
	I1123 10:18:56.621453  394315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:18:56.621507  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.639509  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:56.736903  394315 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:18:56.741519  394315 fix.go:56] duration metric: took 4.633176884s for fixHost
	I1123 10:18:56.741547  394315 start.go:83] releasing machines lock for "newest-cni-956615", held for 4.633224185s
	I1123 10:18:56.741639  394315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956615
	I1123 10:18:56.759242  394315 ssh_runner.go:195] Run: cat /version.json
	I1123 10:18:56.759292  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.759313  394315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:18:56.759380  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.777311  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:56.778060  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:56.925608  394315 ssh_runner.go:195] Run: systemctl --version
	I1123 10:18:56.933469  394315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:18:56.968444  394315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:18:56.973374  394315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:18:56.973443  394315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:18:56.981566  394315 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:18:56.981589  394315 start.go:496] detecting cgroup driver to use...
	I1123 10:18:56.981627  394315 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 10:18:56.981686  394315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:18:56.995837  394315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:18:57.008368  394315 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:18:57.008418  394315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:18:57.023133  394315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:18:57.035490  394315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:18:57.115630  394315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:18:57.196692  394315 docker.go:234] disabling docker service ...
	I1123 10:18:57.196779  394315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:18:57.212027  394315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:18:57.224568  394315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:18:57.304246  394315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:18:57.383429  394315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:18:57.395933  394315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:18:57.410060  394315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:18:57.410151  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.419364  394315 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 10:18:57.419416  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.428434  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.437359  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.446280  394315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:18:57.454724  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.463785  394315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.472508  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.481248  394315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:18:57.488803  394315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:18:57.496308  394315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:18:57.573983  394315 ssh_runner.go:195] Run: sudo systemctl restart crio
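
Taken together, the sed edits above reduce to a handful of settings in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the systemd cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. A sketch of how the resulting file could be checked on the node; the grep call is illustrative and not part of the test output:

	docker exec newest-cni-956615 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, based on the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#     "net.ipv4.ip_unprivileged_port_start=0",
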
	I1123 10:18:57.718163  394315 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:18:57.718238  394315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:18:57.722219  394315 start.go:564] Will wait 60s for crictl version
	I1123 10:18:57.722278  394315 ssh_runner.go:195] Run: which crictl
	I1123 10:18:57.726031  394315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:18:57.751027  394315 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:18:57.751130  394315 ssh_runner.go:195] Run: crio --version
	I1123 10:18:57.778633  394315 ssh_runner.go:195] Run: crio --version
	I1123 10:18:57.806895  394315 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:18:57.807958  394315 cli_runner.go:164] Run: docker network inspect newest-cni-956615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:18:57.825213  394315 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:18:57.829406  394315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
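
The bash one-liner above drops any stale host.minikube.internal entry from the node's /etc/hosts and appends the Docker network gateway. A hypothetical spot-check from the host (values taken from the log; the command itself is not part of the test):

	docker exec newest-cni-956615 grep host.minikube.internal /etc/hosts
	# expected: 192.168.76.1	host.minikube.internal
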
	I1123 10:18:57.841175  394315 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1123 10:18:57.842167  394315 kubeadm.go:884] updating cluster {Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:18:57.842312  394315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:18:57.842362  394315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:18:57.874472  394315 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:18:57.874497  394315 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:18:57.874557  394315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:18:57.899498  394315 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:18:57.899520  394315 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:18:57.899529  394315 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 10:18:57.899664  394315 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-956615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
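
The kubelet unit override shown above is copied to the node a few lines further down as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. If a start ever stalls at this stage, the drop-in and kubelet state can be inspected by hand; these commands are an assumed debugging aid, not output from this run:

	docker exec newest-cni-956615 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	docker exec newest-cni-956615 systemctl status kubelet --no-pager
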
	I1123 10:18:57.899753  394315 ssh_runner.go:195] Run: crio config
	I1123 10:18:57.945307  394315 cni.go:84] Creating CNI manager for ""
	I1123 10:18:57.945334  394315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:18:57.945353  394315 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 10:18:57.945385  394315 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-956615 NodeName:newest-cni-956615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:18:57.945529  394315 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-956615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
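
The generated kubeadm configuration is written to /var/tmp/minikube/kubeadm.yaml.new on the node (see the scp line below). Assuming kubeadm is cached alongside kubelet under /var/lib/minikube/binaries/v1.34.1 and that the v1.34 kubeadm supports the "kubeadm config validate" subcommand, the file could in principle be sanity-checked like this (a sketch, not part of the test):

	docker exec newest-cni-956615 /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
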
	
	I1123 10:18:57.945603  394315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:18:57.954040  394315 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:18:57.954111  394315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:18:57.962312  394315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 10:18:57.974790  394315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:18:57.987293  394315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1123 10:18:57.999467  394315 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:18:58.003369  394315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:18:58.012965  394315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:18:58.094317  394315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:18:58.124328  394315 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615 for IP: 192.168.76.2
	I1123 10:18:58.124349  394315 certs.go:195] generating shared ca certs ...
	I1123 10:18:58.124370  394315 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:58.124522  394315 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 10:18:58.124600  394315 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 10:18:58.124620  394315 certs.go:257] generating profile certs ...
	I1123 10:18:58.124722  394315 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/client.key
	I1123 10:18:58.124804  394315 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/apiserver.key.27a853cb
	I1123 10:18:58.124856  394315 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/proxy-client.key
	I1123 10:18:58.124994  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem (1338 bytes)
	W1123 10:18:58.125036  394315 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870_empty.pem, impossibly tiny 0 bytes
	I1123 10:18:58.125052  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 10:18:58.125113  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:18:58.125156  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:18:58.125191  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 10:18:58.125250  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:18:58.125897  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:18:58.144169  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:18:58.162839  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:18:58.181511  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 10:18:58.206364  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 10:18:58.224546  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:18:58.241212  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:18:58.257774  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:18:58.274527  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:18:58.291570  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem --> /usr/share/ca-certificates/67870.pem (1338 bytes)
	I1123 10:18:58.309143  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /usr/share/ca-certificates/678702.pem (1708 bytes)
	I1123 10:18:58.327593  394315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:18:58.340335  394315 ssh_runner.go:195] Run: openssl version
	I1123 10:18:58.346917  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:18:58.355590  394315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:18:58.359305  394315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:18:58.359346  394315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:18:58.394024  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:18:58.402117  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67870.pem && ln -fs /usr/share/ca-certificates/67870.pem /etc/ssl/certs/67870.pem"
	I1123 10:18:58.410347  394315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67870.pem
	I1123 10:18:58.413983  394315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:28 /usr/share/ca-certificates/67870.pem
	I1123 10:18:58.414033  394315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67870.pem
	I1123 10:18:58.447559  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67870.pem /etc/ssl/certs/51391683.0"
	I1123 10:18:58.455430  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678702.pem && ln -fs /usr/share/ca-certificates/678702.pem /etc/ssl/certs/678702.pem"
	I1123 10:18:58.463887  394315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678702.pem
	I1123 10:18:58.467518  394315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:28 /usr/share/ca-certificates/678702.pem
	I1123 10:18:58.467569  394315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678702.pem
	I1123 10:18:58.502214  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678702.pem /etc/ssl/certs/3ec20f2e.0"
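
Each "ln -fs" above names the symlink after the certificate's subject hash, which is how OpenSSL looks up CAs in /etc/ssl/certs. A small sketch of that naming scheme, to be run inside the node; the hash value is the one visible above for minikubeCA:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "$h"                    # b5213941, matching the b5213941.0 symlink created above
	ls -l "/etc/ssl/certs/$h.0"  # -> points at /etc/ssl/certs/minikubeCA.pem
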
	I1123 10:18:58.510610  394315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:18:58.514564  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:18:58.548572  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:18:58.582475  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:18:58.617633  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:18:58.663551  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:18:58.706433  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
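
Each of the "-checkend 86400" runs above exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, and non-zero otherwise. A minimal illustration of that exit-status contract; the if/else wrapper is illustrative, and the certificate path is taken from the log:

	if docker exec newest-cni-956615 openssl x509 -noout \
	     -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
		echo "certificate is still valid 24h from now"
	else
		echo "certificate expires within 24h (or is already invalid)"
	fi
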
	I1123 10:18:58.755355  394315 kubeadm.go:401] StartCluster: {Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:18:58.755458  394315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:18:58.755534  394315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:18:58.792290  394315 cri.go:89] found id: "568d4e2e13f794ad02a27313df00fc828eacd24d6ea3ba4e30c0855507078458"
	I1123 10:18:58.792321  394315 cri.go:89] found id: "cc3d50e3b18ae83441894d5866b2ff39bc525a005f871ba93a8d151eef685e8f"
	I1123 10:18:58.792327  394315 cri.go:89] found id: "ab7965c57730d7f61bd3cc6d5b19e95f55562ca947a390e4616eeb716906b8a0"
	I1123 10:18:58.792332  394315 cri.go:89] found id: "3e6bea1c7000431f1f92160966ebdcb4353c6a869289c185164951c1370b9403"
	I1123 10:18:58.792336  394315 cri.go:89] found id: ""
	I1123 10:18:58.792387  394315 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:18:58.806842  394315 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:18:58Z" level=error msg="open /run/runc: no such file or directory"
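	The kube-system container IDs above came from crictl; the follow-up runc probe fails only because /run/runc does not exist on this crio node, and minikube logs the warning and carries on. The listing itself can be repeated verbatim on the node:
	  # same label filter as in the log; prints container IDs only
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system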
	I1123 10:18:58.806912  394315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:18:58.815260  394315 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:18:58.815280  394315 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:18:58.815325  394315 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:18:58.822691  394315 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:18:58.823363  394315 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-956615" does not appear in /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:18:58.823632  394315 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-64343/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-956615" cluster setting kubeconfig missing "newest-cni-956615" context setting]
	I1123 10:18:58.824148  394315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
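	After this repair the profile's cluster and context entries exist in the integration run's kubeconfig; a quick manual confirmation (not part of the test itself) would be:
	  KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig kubectl config get-contexts newest-cni-956615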
	I1123 10:18:58.825412  394315 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:18:58.833345  394315 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 10:18:58.833374  394315 kubeadm.go:602] duration metric: took 18.088164ms to restartPrimaryControlPlane
	I1123 10:18:58.833384  394315 kubeadm.go:403] duration metric: took 78.041992ms to StartCluster
	I1123 10:18:58.833401  394315 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:58.833464  394315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:18:58.834283  394315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:58.834490  394315 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:18:58.834556  394315 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:18:58.834673  394315 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-956615"
	I1123 10:18:58.834693  394315 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-956615"
	W1123 10:18:58.834705  394315 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:18:58.834716  394315 config.go:182] Loaded profile config "newest-cni-956615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:58.834736  394315 host.go:66] Checking if "newest-cni-956615" exists ...
	I1123 10:18:58.834733  394315 addons.go:70] Setting default-storageclass=true in profile "newest-cni-956615"
	I1123 10:18:58.834767  394315 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-956615"
	I1123 10:18:58.834749  394315 addons.go:70] Setting dashboard=true in profile "newest-cni-956615"
	I1123 10:18:58.834803  394315 addons.go:239] Setting addon dashboard=true in "newest-cni-956615"
	W1123 10:18:58.834825  394315 addons.go:248] addon dashboard should already be in state true
	I1123 10:18:58.834866  394315 host.go:66] Checking if "newest-cni-956615" exists ...
	I1123 10:18:58.835064  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:58.835255  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:58.835473  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:58.838196  394315 out.go:179] * Verifying Kubernetes components...
	I1123 10:18:58.839321  394315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:18:58.862172  394315 addons.go:239] Setting addon default-storageclass=true in "newest-cni-956615"
	W1123 10:18:58.862197  394315 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:18:58.862226  394315 host.go:66] Checking if "newest-cni-956615" exists ...
	I1123 10:18:58.862714  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:58.863432  394315 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:18:58.863504  394315 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:18:58.864523  394315 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:18:58.864548  394315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:18:58.864608  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:58.865756  394315 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1123 10:18:55.341834  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	W1123 10:18:57.342845  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	I1123 10:18:58.866799  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:18:58.866823  394315 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:18:58.866899  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:58.896558  394315 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:18:58.896587  394315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:18:58.896649  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:58.902289  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:58.906565  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:58.921464  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:59.001978  394315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:18:59.018766  394315 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:18:59.018846  394315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:18:59.020845  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:18:59.020869  394315 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:18:59.027142  394315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:18:59.034971  394315 api_server.go:72] duration metric: took 200.448073ms to wait for apiserver process to appear ...
	I1123 10:18:59.035003  394315 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:18:59.035026  394315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:18:59.037406  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:18:59.037477  394315 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:18:59.039283  394315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:18:59.053902  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:18:59.053928  394315 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:18:59.070168  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:18:59.070193  394315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:18:59.086290  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:18:59.086317  394315 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:18:59.103619  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:18:59.103647  394315 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:18:59.116917  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:18:59.116941  394315 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:18:59.129744  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:18:59.129770  394315 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:18:59.142130  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:18:59.142153  394315 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:18:59.154836  394315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
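	The addons installed here all follow the same pattern: manifests are copied under /etc/kubernetes/addons/ and then applied with the kubelet-local kubeconfig and the cached kubectl binary, e.g. for the storage provisioner:
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml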
	I1123 10:19:00.310954  394315 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 10:19:00.310988  394315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 10:19:00.311005  394315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:19:00.343552  394315 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 10:19:00.343631  394315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 10:19:00.535410  394315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:19:00.541409  394315 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:19:00.541448  394315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:19:00.873844  394315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.846667586s)
	I1123 10:19:00.873914  394315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.834599768s)
	I1123 10:19:00.874012  394315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.71914196s)
	I1123 10:19:00.875636  394315 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-956615 addons enable metrics-server
	
	I1123 10:19:00.885104  394315 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 10:19:00.886167  394315 addons.go:530] duration metric: took 2.051621498s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 10:19:01.035140  394315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:19:01.039364  394315 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:19:01.039396  394315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:19:01.535682  394315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:19:01.540794  394315 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:19:01.541893  394315 api_server.go:141] control plane version: v1.34.1
	I1123 10:19:01.541921  394315 api_server.go:131] duration metric: took 2.506910717s to wait for apiserver health ...
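	The 403 and 500 responses during this wait are expected: anonymous access to /healthz is rejected until the RBAC bootstrap post-start hook has created the default roles, and the 500s report the two still-running hooks marked [-] above. The same probe can be made by hand; ?verbose lists the individual checks:
	  # -k because the probe is anonymous and presents no client certificate
	  curl -k "https://192.168.76.2:8443/healthz?verbose"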
	I1123 10:19:01.541930  394315 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:19:01.545766  394315 system_pods.go:59] 8 kube-system pods found
	I1123 10:19:01.545807  394315 system_pods.go:61] "coredns-66bc5c9577-f5fbv" [a2a6f660-7d27-4ea8-b5b3-af124330c296] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 10:19:01.545816  394315 system_pods.go:61] "etcd-newest-cni-956615" [f8a39510-5fa3-42e6-a37e-6ceb4ff74876] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:19:01.545824  394315 system_pods.go:61] "kindnet-pfcv2" [5b3ef87c-1b75-4bb7-bafc-049f36caebc5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 10:19:01.545831  394315 system_pods.go:61] "kube-apiserver-newest-cni-956615" [05c7eaaf-a379-4c0e-b15e-b4fd9b251e21] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:19:01.545842  394315 system_pods.go:61] "kube-controller-manager-newest-cni-956615" [9a577ee2-bcae-49ed-a341-0361d8b3e799] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:19:01.545848  394315 system_pods.go:61] "kube-proxy-ktlnh" [ca7b0e9b-f2f8-4b3f-92d0-691144b655a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 10:19:01.545856  394315 system_pods.go:61] "kube-scheduler-newest-cni-956615" [4eb905ef-9079-49bf-97cf-87d904882001] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:19:01.545861  394315 system_pods.go:61] "storage-provisioner" [3cdc36f3-a1eb-45d6-9e02-f2c0514c2888] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 10:19:01.545870  394315 system_pods.go:74] duration metric: took 3.934068ms to wait for pod list to return data ...
	I1123 10:19:01.545877  394315 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:19:01.548629  394315 default_sa.go:45] found service account: "default"
	I1123 10:19:01.548653  394315 default_sa.go:55] duration metric: took 2.766657ms for default service account to be created ...
	I1123 10:19:01.548665  394315 kubeadm.go:587] duration metric: took 2.714149617s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:19:01.548682  394315 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:19:01.551434  394315 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:19:01.551456  394315 node_conditions.go:123] node cpu capacity is 8
	I1123 10:19:01.551477  394315 node_conditions.go:105] duration metric: took 2.79002ms to run NodePressure ...
	I1123 10:19:01.551492  394315 start.go:242] waiting for startup goroutines ...
	I1123 10:19:01.551505  394315 start.go:247] waiting for cluster config update ...
	I1123 10:19:01.551523  394315 start.go:256] writing updated cluster config ...
	I1123 10:19:01.551766  394315 ssh_runner.go:195] Run: rm -f paused
	I1123 10:19:01.602233  394315 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:19:01.604242  394315 out.go:179] * Done! kubectl is now configured to use "newest-cni-956615" cluster and "default" namespace by default
	W1123 10:18:59.842677  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	W1123 10:19:02.343352  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
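	The two pod_ready warnings tagged 390057 are interleaved from a different test process running in parallel (note the PID prefix), not from this start. With Done! reported, the profile can be checked from the test host before reading the component dumps below:
	  out/minikube-linux-amd64 -p newest-cni-956615 status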
	
	
	==> CRI-O <==
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.494541865Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.499759464Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=08638d8a-c840-47e4-8aff-a75c579db976 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.500171184Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=8f135dce-8d8a-422b-96e8-dc193ca28925 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.501492811Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.50211159Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.502145793Z" level=info msg="Ran pod sandbox 671e983c6fc90fde0774b2df84cdc05fa81a94801a1ee34622d5ad545aea16bd with infra container: kube-system/kindnet-pfcv2/POD" id=08638d8a-c840-47e4-8aff-a75c579db976 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.502895953Z" level=info msg="Ran pod sandbox 2f537ce5071d7ca9de345c82e2f0fb9a89653e3eee549c2d28151f41244b821f with infra container: kube-system/kube-proxy-ktlnh/POD" id=8f135dce-8d8a-422b-96e8-dc193ca28925 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.503453116Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3b95cb25-0641-43a3-807c-9c73eb2ad19c name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.503836279Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=3bd9fa11-3f01-488c-9e78-6113b90c4380 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.50444829Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=94b6ff7f-1c51-4710-9971-3eba646f87f6 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.504719628Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=aad314ad-bf8f-4953-a880-ef08b4a44cab name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.506278187Z" level=info msg="Creating container: kube-system/kube-proxy-ktlnh/kube-proxy" id=013cad83-7d07-4f7e-9de9-242f4eefaa17 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.506393572Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.50646503Z" level=info msg="Creating container: kube-system/kindnet-pfcv2/kindnet-cni" id=b9bc77de-6b3c-4342-9e97-2811cd9aefec name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.506557583Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.511007546Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.511630654Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.51174802Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.512295677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.547146682Z" level=info msg="Created container b05323c6c0ab1938eb1588a7d2d3feb32f80596ed02fed4cce85977e5a3e22b2: kube-system/kindnet-pfcv2/kindnet-cni" id=b9bc77de-6b3c-4342-9e97-2811cd9aefec name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.54788301Z" level=info msg="Starting container: b05323c6c0ab1938eb1588a7d2d3feb32f80596ed02fed4cce85977e5a3e22b2" id=726ddcc5-63db-4dd5-b818-9ee70daa2e72 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.549985226Z" level=info msg="Started container" PID=1051 containerID=b05323c6c0ab1938eb1588a7d2d3feb32f80596ed02fed4cce85977e5a3e22b2 description=kube-system/kindnet-pfcv2/kindnet-cni id=726ddcc5-63db-4dd5-b818-9ee70daa2e72 name=/runtime.v1.RuntimeService/StartContainer sandboxID=671e983c6fc90fde0774b2df84cdc05fa81a94801a1ee34622d5ad545aea16bd
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.553430747Z" level=info msg="Created container 78484bdafb835b4a204df4db8a2d43436469113977af7e007b407536a0297189: kube-system/kube-proxy-ktlnh/kube-proxy" id=013cad83-7d07-4f7e-9de9-242f4eefaa17 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.554122649Z" level=info msg="Starting container: 78484bdafb835b4a204df4db8a2d43436469113977af7e007b407536a0297189" id=14137c3b-be2d-4878-9c6f-b7586d9d0a53 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.556730737Z" level=info msg="Started container" PID=1052 containerID=78484bdafb835b4a204df4db8a2d43436469113977af7e007b407536a0297189 description=kube-system/kube-proxy-ktlnh/kube-proxy id=14137c3b-be2d-4878-9c6f-b7586d9d0a53 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2f537ce5071d7ca9de345c82e2f0fb9a89653e3eee549c2d28151f41244b821f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b05323c6c0ab1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   3 seconds ago       Running             kindnet-cni               1                   671e983c6fc90       kindnet-pfcv2                               kube-system
	78484bdafb835       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   3 seconds ago       Running             kube-proxy                1                   2f537ce5071d7       kube-proxy-ktlnh                            kube-system
	568d4e2e13f79       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   6 seconds ago       Running             kube-scheduler            1                   9883c90ab3982       kube-scheduler-newest-cni-956615            kube-system
	cc3d50e3b18ae       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   6 seconds ago       Running             kube-controller-manager   1                   9782d216f2d59       kube-controller-manager-newest-cni-956615   kube-system
	ab7965c57730d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   6 seconds ago       Running             etcd                      1                   3c615633e20d4       etcd-newest-cni-956615                      kube-system
	3e6bea1c70004       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   6 seconds ago       Running             kube-apiserver            1                   fdc95a1d7fc54       kube-apiserver-newest-cni-956615            kube-system
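	This table is the CRI-level view of the node and can be regenerated on the node at any time:
	  sudo crictl ps -a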
	
	
	==> describe nodes <==
	Name:               newest-cni-956615
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-956615
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=newest-cni-956615
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_18_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:18:32 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-956615
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:19:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:19:00 +0000   Sun, 23 Nov 2025 10:18:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:19:00 +0000   Sun, 23 Nov 2025 10:18:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:19:00 +0000   Sun, 23 Nov 2025 10:18:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 23 Nov 2025 10:19:00 +0000   Sun, 23 Nov 2025 10:18:31 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-956615
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                a5d2206a-9559-4bfb-833b-e4a1b122ea26
	  Boot ID:                    37682299-5e60-467e-85b2-43c912a4056e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-956615                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-pfcv2                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-newest-cni-956615             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-956615    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-ktlnh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-newest-cni-956615             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 24s              kube-proxy       
	  Normal  Starting                 3s               kube-proxy       
	  Normal  Starting                 31s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s              kubelet          Node newest-cni-956615 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s              kubelet          Node newest-cni-956615 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s              kubelet          Node newest-cni-956615 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s              node-controller  Node newest-cni-956615 event: Registered Node newest-cni-956615 in Controller
	  Normal  Starting                 7s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x8 over 7s)  kubelet          Node newest-cni-956615 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x8 over 7s)  kubelet          Node newest-cni-956615 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x8 over 7s)  kubelet          Node newest-cni-956615 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2s               node-controller  Node newest-cni-956615 event: Registered Node newest-cni-956615 in Controller
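	The node is still NotReady only because no CNI configuration had been written to /etc/cni/net.d/ at the time of the dump (see the Ready condition message); once kindnet writes it, the node.kubernetes.io/not-ready taint is lifted and the two Pending pods can schedule. The same view can be reproduced against the profile's context:
	  kubectl --context newest-cni-956615 describe node newest-cni-956615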
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[ +16.383752] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 09:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 10:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[Nov23 10:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 16 63 a6 3b 7c 08 06
	[  +0.000421] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e f8 56 88 48 d7 08 06
	[  +0.082350] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff be 6d 17 98 af e9 08 06
	[  +0.000334] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[ +24.687881] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 3c b3 56 e6 32 08 06
	[  +0.000364] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da b2 25 9e f0 5d 08 06
	[Nov23 10:16] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	[ +42.472302] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 bc be 6d 36 b3 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	
	
	==> etcd [ab7965c57730d7f61bd3cc6d5b19e95f55562ca947a390e4616eeb716906b8a0] <==
	{"level":"warn","ts":"2025-11-23T10:18:59.647121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.656162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.662607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.672377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.675611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.682421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.689558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.696555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.704456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.713240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.720712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.727879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.734787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.741246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.748731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.755902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.763454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.770016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.777204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.784126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.791341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.812563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.819749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.828957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.882662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34982","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:19:05 up  3:01,  0 user,  load average: 3.86, 4.75, 3.01
	Linux newest-cni-956615 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b05323c6c0ab1938eb1588a7d2d3feb32f80596ed02fed4cce85977e5a3e22b2] <==
	I1123 10:19:01.821981       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:19:01.822344       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 10:19:01.822465       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:19:01.822479       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:19:01.822501       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:19:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:19:01.934491       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:19:01.934547       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:19:01.934560       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:19:02.021879       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:19:02.391299       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:19:02.391322       1 metrics.go:72] Registering metrics
	I1123 10:19:02.391398       1 controller.go:711] "Syncing nftables rules"
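	kindnet came back as attempt 1 (see the container table above) and is syncing nftables rules; its ongoing output is easiest to follow through the pod rather than the container ID:
	  kubectl --context newest-cni-956615 -n kube-system logs -f kindnet-pfcv2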
	
	
	==> kube-apiserver [3e6bea1c7000431f1f92160966ebdcb4353c6a869289c185164951c1370b9403] <==
	I1123 10:19:00.417033       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 10:19:00.417622       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 10:19:00.417671       1 aggregator.go:171] initial CRD sync complete...
	I1123 10:19:00.417680       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 10:19:00.417700       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:19:00.417707       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:19:00.427588       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 10:19:00.447509       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 10:19:00.455755       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 10:19:00.455788       1 policy_source.go:240] refreshing policies
	I1123 10:19:00.456310       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:19:00.498407       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 10:19:00.498438       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 10:19:00.679255       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:19:00.707416       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:19:00.724218       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:19:00.731313       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:19:00.737543       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:19:00.766682       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.155.50"}
	I1123 10:19:00.776221       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.61.6"}
	I1123 10:19:01.303197       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:19:03.089995       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:19:03.488566       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:19:03.488566       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:19:03.589554       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [cc3d50e3b18ae83441894d5866b2ff39bc525a005f871ba93a8d151eef685e8f] <==
	I1123 10:19:03.083438       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 10:19:03.085716       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 10:19:03.085746       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 10:19:03.085789       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 10:19:03.085803       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 10:19:03.086083       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 10:19:03.086440       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 10:19:03.086547       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 10:19:03.086812       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 10:19:03.087812       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 10:19:03.090020       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 10:19:03.092300       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 10:19:03.092312       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:19:03.092474       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 10:19:03.092568       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 10:19:03.092585       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 10:19:03.092594       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 10:19:03.095321       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:19:03.095339       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 10:19:03.095347       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 10:19:03.097682       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 10:19:03.102969       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 10:19:03.104177       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:19:03.105268       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:19:03.107565       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [78484bdafb835b4a204df4db8a2d43436469113977af7e007b407536a0297189] <==
	I1123 10:19:01.590458       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:19:01.660909       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:19:01.761961       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:19:01.762008       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 10:19:01.762154       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:19:01.786038       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:19:01.786126       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:19:01.791325       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:19:01.791627       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:19:01.791661       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:19:01.792897       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:19:01.792918       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:19:01.792955       1 config.go:200] "Starting service config controller"
	I1123 10:19:01.792968       1 config.go:309] "Starting node config controller"
	I1123 10:19:01.792982       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:19:01.792988       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:19:01.792990       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:19:01.792997       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:19:01.792964       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:19:01.893989       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:19:01.894120       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:19:01.894246       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [568d4e2e13f794ad02a27313df00fc828eacd24d6ea3ba4e30c0855507078458] <==
	I1123 10:18:59.115184       1 serving.go:386] Generated self-signed cert in-memory
	W1123 10:19:00.325617       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 10:19:00.327456       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 10:19:00.327485       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 10:19:00.327497       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 10:19:00.373437       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 10:19:00.377158       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:19:00.380122       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:19:00.380158       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:19:00.386881       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:19:00.387130       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1123 10:19:00.389425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 10:19:00.398987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1123 10:19:01.481164       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: E1123 10:19:00.230326     677 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-956615\" not found" node="newest-cni-956615"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: I1123 10:19:00.391490     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-956615"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: E1123 10:19:00.527801     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-956615\" already exists" pod="kube-system/kube-apiserver-newest-cni-956615"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: I1123 10:19:00.527851     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-956615"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: I1123 10:19:00.530247     677 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-956615"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: I1123 10:19:00.530359     677 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-956615"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: I1123 10:19:00.530404     677 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: I1123 10:19:00.531338     677 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: E1123 10:19:00.535602     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-956615\" already exists" pod="kube-system/kube-controller-manager-newest-cni-956615"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: I1123 10:19:00.535636     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-956615"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: E1123 10:19:00.542984     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-956615\" already exists" pod="kube-system/kube-scheduler-newest-cni-956615"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: I1123 10:19:00.543021     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-956615"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: E1123 10:19:00.549949     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-956615\" already exists" pod="kube-system/etcd-newest-cni-956615"
	Nov 23 10:19:01 newest-cni-956615 kubelet[677]: I1123 10:19:01.186156     677 apiserver.go:52] "Watching apiserver"
	Nov 23 10:19:01 newest-cni-956615 kubelet[677]: I1123 10:19:01.191180     677 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 23 10:19:01 newest-cni-956615 kubelet[677]: I1123 10:19:01.230340     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-956615"
	Nov 23 10:19:01 newest-cni-956615 kubelet[677]: E1123 10:19:01.235637     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-956615\" already exists" pod="kube-system/kube-apiserver-newest-cni-956615"
	Nov 23 10:19:01 newest-cni-956615 kubelet[677]: I1123 10:19:01.244360     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca7b0e9b-f2f8-4b3f-92d0-691144b655a6-lib-modules\") pod \"kube-proxy-ktlnh\" (UID: \"ca7b0e9b-f2f8-4b3f-92d0-691144b655a6\") " pod="kube-system/kube-proxy-ktlnh"
	Nov 23 10:19:01 newest-cni-956615 kubelet[677]: I1123 10:19:01.244441     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5b3ef87c-1b75-4bb7-bafc-049f36caebc5-cni-cfg\") pod \"kindnet-pfcv2\" (UID: \"5b3ef87c-1b75-4bb7-bafc-049f36caebc5\") " pod="kube-system/kindnet-pfcv2"
	Nov 23 10:19:01 newest-cni-956615 kubelet[677]: I1123 10:19:01.244616     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b3ef87c-1b75-4bb7-bafc-049f36caebc5-lib-modules\") pod \"kindnet-pfcv2\" (UID: \"5b3ef87c-1b75-4bb7-bafc-049f36caebc5\") " pod="kube-system/kindnet-pfcv2"
	Nov 23 10:19:01 newest-cni-956615 kubelet[677]: I1123 10:19:01.244772     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca7b0e9b-f2f8-4b3f-92d0-691144b655a6-xtables-lock\") pod \"kube-proxy-ktlnh\" (UID: \"ca7b0e9b-f2f8-4b3f-92d0-691144b655a6\") " pod="kube-system/kube-proxy-ktlnh"
	Nov 23 10:19:01 newest-cni-956615 kubelet[677]: I1123 10:19:01.244806     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b3ef87c-1b75-4bb7-bafc-049f36caebc5-xtables-lock\") pod \"kindnet-pfcv2\" (UID: \"5b3ef87c-1b75-4bb7-bafc-049f36caebc5\") " pod="kube-system/kindnet-pfcv2"
	Nov 23 10:19:02 newest-cni-956615 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:19:02 newest-cni-956615 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:19:02 newest-cni-956615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
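Note on the kube-scheduler warnings captured above: the "Unable to get configmap/extension-apiserver-authentication" and "cannot get/list configmaps" messages are the RBAC gap the scheduler log itself suggests fixing with a rolebinding. A minimal sketch of that fix for this profile follows; the binding name scheduler-auth-reader is illustrative, and it assumes the built-in extension-apiserver-authentication-reader role in kube-system is acceptable to bind to the system:kube-scheduler user in a test cluster:

	kubectl --context newest-cni-956615 -n kube-system create rolebinding scheduler-auth-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler

The warnings are generally benign: the scheduler continues without the authentication configuration, and its client-ca caches sync shortly afterwards in the log above.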
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-956615 -n newest-cni-956615
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-956615 -n newest-cni-956615: exit status 2 (335.599347ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-956615 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-f5fbv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tzd8g kubernetes-dashboard-855c9754f9-z66k2
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-956615 describe pod coredns-66bc5c9577-f5fbv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tzd8g kubernetes-dashboard-855c9754f9-z66k2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-956615 describe pod coredns-66bc5c9577-f5fbv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tzd8g kubernetes-dashboard-855c9754f9-z66k2: exit status 1 (63.229776ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-f5fbv" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-tzd8g" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-z66k2" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-956615 describe pod coredns-66bc5c9577-f5fbv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tzd8g kubernetes-dashboard-855c9754f9-z66k2: exit status 1
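The NotFound errors above follow from how the post-mortem helper works: it first lists every pod that is not Running across all namespaces, then describes those pods by name without a namespace, so pods that actually live in kube-system or kubernetes-dashboard are looked up in the default namespace and come back NotFound. The listing step can be reproduced by hand with the same field selector the harness logs at helpers_test.go:269; only the line breaks are added here:

	kubectl --context newest-cni-956615 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'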
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-956615
helpers_test.go:243: (dbg) docker inspect newest-cni-956615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f539d26299e024705c8ed4977ba95cc7d68e6aef83e923825c2eb03f8e10fec6",
	        "Created": "2025-11-23T10:18:21.747900359Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 394520,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:18:52.152388921Z",
	            "FinishedAt": "2025-11-23T10:18:51.235949194Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/f539d26299e024705c8ed4977ba95cc7d68e6aef83e923825c2eb03f8e10fec6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f539d26299e024705c8ed4977ba95cc7d68e6aef83e923825c2eb03f8e10fec6/hostname",
	        "HostsPath": "/var/lib/docker/containers/f539d26299e024705c8ed4977ba95cc7d68e6aef83e923825c2eb03f8e10fec6/hosts",
	        "LogPath": "/var/lib/docker/containers/f539d26299e024705c8ed4977ba95cc7d68e6aef83e923825c2eb03f8e10fec6/f539d26299e024705c8ed4977ba95cc7d68e6aef83e923825c2eb03f8e10fec6-json.log",
	        "Name": "/newest-cni-956615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-956615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-956615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f539d26299e024705c8ed4977ba95cc7d68e6aef83e923825c2eb03f8e10fec6",
	                "LowerDir": "/var/lib/docker/overlay2/5e2770a52b215d78ec65c81478f7d140e2c3671758e4e1ba86ee1fa9b246e021-init/diff:/var/lib/docker/overlay2/fa24abb4c55f78a010c7e2a32f724b8d5e912441e40bb77877899b0e5f3a9c8d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e2770a52b215d78ec65c81478f7d140e2c3671758e4e1ba86ee1fa9b246e021/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e2770a52b215d78ec65c81478f7d140e2c3671758e4e1ba86ee1fa9b246e021/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e2770a52b215d78ec65c81478f7d140e2c3671758e4e1ba86ee1fa9b246e021/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-956615",
	                "Source": "/var/lib/docker/volumes/newest-cni-956615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-956615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-956615",
	                "name.minikube.sigs.k8s.io": "newest-cni-956615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "50448023e57c4db7dacb14b8e5d10116d0b7bba8c963cb16de94b70fdc37f632",
	            "SandboxKey": "/var/run/docker/netns/50448023e57c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-956615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5c68f6166aad3fb7b971424217c915ea4f510b57832199566c6c4da05aa3fd0e",
	                    "EndpointID": "7316d759afe56c45f4ba87942a8121882ebf28eb3ce422586dc31634f5e39c76",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "42:13:82:53:71:48",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-956615",
	                        "f539d26299e0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
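Most of the inspect document above is static container configuration; only a few fields are needed to sanity-check the container by hand: its state, its network IP, and the host port mapped to 8443. Those can be read directly with an inspect format template in the same Go-template style the minikube log below uses for its cli_runner calls. This exact template is an illustrative sketch, not a command the harness runs:

	docker container inspect newest-cni-956615 \
	  --format '{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'

Against the output above this prints "running 192.168.76.2 33139", matching the State, Networks, and Ports sections of the JSON.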
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-956615 -n newest-cni-956615
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-956615 -n newest-cni-956615: exit status 2 (336.149971ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-956615 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p embed-certs-412306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:17 UTC │
	│ start   │ -p embed-certs-412306 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:17 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ old-k8s-version-990757 image list --format=json                                                                                                                                                                                               │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p old-k8s-version-990757 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-772252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-772252 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ delete  │ -p old-k8s-version-990757                                                                                                                                                                                                                     │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ no-preload-541522 image list --format=json                                                                                                                                                                                                    │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p no-preload-541522 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ delete  │ -p old-k8s-version-990757                                                                                                                                                                                                                     │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ embed-certs-412306 image list --format=json                                                                                                                                                                                                   │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p newest-cni-956615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p embed-certs-412306 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ delete  │ -p no-preload-541522                                                                                                                                                                                                                          │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ delete  │ -p no-preload-541522                                                                                                                                                                                                                          │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ delete  │ -p embed-certs-412306                                                                                                                                                                                                                         │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ delete  │ -p embed-certs-412306                                                                                                                                                                                                                         │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-772252 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p default-k8s-diff-port-772252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-956615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ stop    │ -p newest-cni-956615 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ addons  │ enable dashboard -p newest-cni-956615 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p newest-cni-956615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:19 UTC │
	│ image   │ newest-cni-956615 image list --format=json                                                                                                                                                                                                    │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │ 23 Nov 25 10:19 UTC │
	│ pause   │ -p newest-cni-956615 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:18:51
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:18:51.922533  394315 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:18:51.922773  394315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:51.922782  394315 out.go:374] Setting ErrFile to fd 2...
	I1123 10:18:51.922786  394315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:51.922982  394315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:18:51.923464  394315 out.go:368] Setting JSON to false
	I1123 10:18:51.924704  394315 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10873,"bootTime":1763882259,"procs":448,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:18:51.924759  394315 start.go:143] virtualization: kvm guest
	I1123 10:18:51.926884  394315 out.go:179] * [newest-cni-956615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:18:51.928337  394315 notify.go:221] Checking for updates...
	I1123 10:18:51.928373  394315 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:18:51.929751  394315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:18:51.931020  394315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:18:51.932349  394315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 10:18:51.933744  394315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:18:51.935099  394315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:18:51.936795  394315 config.go:182] Loaded profile config "newest-cni-956615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:51.937407  394315 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:18:51.961344  394315 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 10:18:51.961523  394315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:18:52.019286  394315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-23 10:18:52.009047301 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:18:52.019399  394315 docker.go:319] overlay module found
	I1123 10:18:52.021270  394315 out.go:179] * Using the docker driver based on existing profile
	I1123 10:18:52.022550  394315 start.go:309] selected driver: docker
	I1123 10:18:52.022565  394315 start.go:927] validating driver "docker" against &{Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:18:52.022649  394315 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:18:52.023207  394315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:18:52.080543  394315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-23 10:18:52.070324364 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:18:52.080908  394315 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:18:52.080949  394315 cni.go:84] Creating CNI manager for ""
	I1123 10:18:52.081035  394315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:18:52.081106  394315 start.go:353] cluster config:
	{Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:18:52.083159  394315 out.go:179] * Starting "newest-cni-956615" primary control-plane node in "newest-cni-956615" cluster
	I1123 10:18:52.084255  394315 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:18:52.085479  394315 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:18:52.086568  394315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:18:52.086596  394315 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 10:18:52.086606  394315 cache.go:65] Caching tarball of preloaded images
	I1123 10:18:52.086653  394315 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:18:52.086679  394315 preload.go:238] Found /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 10:18:52.086690  394315 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:18:52.086776  394315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/config.json ...
	I1123 10:18:52.108195  394315 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:18:52.108214  394315 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:18:52.108225  394315 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:18:52.108262  394315 start.go:360] acquireMachinesLock for newest-cni-956615: {Name:mk5c1d30234ac54be25b363f4d474b6dfbb1cb30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:18:52.108312  394315 start.go:364] duration metric: took 32.687µs to acquireMachinesLock for "newest-cni-956615"
	I1123 10:18:52.108328  394315 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:18:52.108334  394315 fix.go:54] fixHost starting: 
	I1123 10:18:52.108536  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:52.125249  394315 fix.go:112] recreateIfNeeded on newest-cni-956615: state=Stopped err=<nil>
	W1123 10:18:52.125297  394315 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 10:18:50.342961  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	W1123 10:18:52.842822  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	I1123 10:18:52.127162  394315 out.go:252] * Restarting existing docker container for "newest-cni-956615" ...
	I1123 10:18:52.127226  394315 cli_runner.go:164] Run: docker start newest-cni-956615
	I1123 10:18:52.396853  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:52.415351  394315 kic.go:430] container "newest-cni-956615" state is running.
	I1123 10:18:52.415793  394315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956615
	I1123 10:18:52.434420  394315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/config.json ...
	I1123 10:18:52.434630  394315 machine.go:94] provisionDockerMachine start ...
	I1123 10:18:52.434722  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:52.453553  394315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:18:52.453858  394315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1123 10:18:52.453876  394315 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:18:52.454582  394315 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46492->127.0.0.1:33135: read: connection reset by peer
	I1123 10:18:55.599296  394315 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956615
	
	I1123 10:18:55.599336  394315 ubuntu.go:182] provisioning hostname "newest-cni-956615"
	I1123 10:18:55.599394  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:55.618738  394315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:18:55.618993  394315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1123 10:18:55.619012  394315 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-956615 && echo "newest-cni-956615" | sudo tee /etc/hostname
	I1123 10:18:55.770698  394315 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956615
	
	I1123 10:18:55.770811  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:55.788813  394315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:18:55.789027  394315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1123 10:18:55.789043  394315 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-956615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-956615/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-956615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:18:55.932742  394315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:18:55.932777  394315 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-64343/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-64343/.minikube}
	I1123 10:18:55.932804  394315 ubuntu.go:190] setting up certificates
	I1123 10:18:55.932828  394315 provision.go:84] configureAuth start
	I1123 10:18:55.932895  394315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956615
	I1123 10:18:55.950646  394315 provision.go:143] copyHostCerts
	I1123 10:18:55.950720  394315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem, removing ...
	I1123 10:18:55.950739  394315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem
	I1123 10:18:55.950807  394315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem (1082 bytes)
	I1123 10:18:55.950927  394315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem, removing ...
	I1123 10:18:55.950935  394315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem
	I1123 10:18:55.950963  394315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem (1123 bytes)
	I1123 10:18:55.951043  394315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem, removing ...
	I1123 10:18:55.951050  394315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem
	I1123 10:18:55.951084  394315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem (1675 bytes)
	I1123 10:18:55.951181  394315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem org=jenkins.newest-cni-956615 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-956615]
	I1123 10:18:55.985638  394315 provision.go:177] copyRemoteCerts
	I1123 10:18:55.985691  394315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:18:55.985729  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.003060  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:56.105036  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 10:18:56.122557  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:18:56.139483  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 10:18:56.157358  394315 provision.go:87] duration metric: took 224.510848ms to configureAuth
	I1123 10:18:56.157392  394315 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:18:56.157621  394315 config.go:182] Loaded profile config "newest-cni-956615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:56.157753  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.175573  394315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:18:56.175795  394315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1123 10:18:56.175812  394315 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:18:56.475612  394315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:18:56.475643  394315 machine.go:97] duration metric: took 4.040999325s to provisionDockerMachine
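The sysconfig drop-in above only injects --insecure-registry for the service CIDR into CRI-O's environment and restarts the service. A minimal manual verification on the node (a sketch, not part of the test run):

    sudo cat /etc/sysconfig/crio.minikube
    systemctl is-active crio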
	I1123 10:18:56.475663  394315 start.go:293] postStartSetup for "newest-cni-956615" (driver="docker")
	I1123 10:18:56.475674  394315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:18:56.475746  394315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:18:56.475803  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.493158  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:56.593217  394315 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:18:56.596801  394315 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:18:56.596832  394315 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:18:56.596844  394315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/addons for local assets ...
	I1123 10:18:56.596895  394315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/files for local assets ...
	I1123 10:18:56.596983  394315 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem -> 678702.pem in /etc/ssl/certs
	I1123 10:18:56.597076  394315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:18:56.604613  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:18:56.621377  394315 start.go:296] duration metric: took 145.698257ms for postStartSetup
	I1123 10:18:56.621453  394315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:18:56.621507  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.639509  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:56.736903  394315 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:18:56.741519  394315 fix.go:56] duration metric: took 4.633176884s for fixHost
	I1123 10:18:56.741547  394315 start.go:83] releasing machines lock for "newest-cni-956615", held for 4.633224185s
	I1123 10:18:56.741639  394315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956615
	I1123 10:18:56.759242  394315 ssh_runner.go:195] Run: cat /version.json
	I1123 10:18:56.759292  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.759313  394315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:18:56.759380  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.777311  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:56.778060  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:56.925608  394315 ssh_runner.go:195] Run: systemctl --version
	I1123 10:18:56.933469  394315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:18:56.968444  394315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:18:56.973374  394315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:18:56.973443  394315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:18:56.981566  394315 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:18:56.981589  394315 start.go:496] detecting cgroup driver to use...
	I1123 10:18:56.981627  394315 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 10:18:56.981686  394315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:18:56.995837  394315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:18:57.008368  394315 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:18:57.008418  394315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:18:57.023133  394315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:18:57.035490  394315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:18:57.115630  394315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:18:57.196692  394315 docker.go:234] disabling docker service ...
	I1123 10:18:57.196779  394315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:18:57.212027  394315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:18:57.224568  394315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:18:57.304246  394315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:18:57.383429  394315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:18:57.395933  394315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:18:57.410060  394315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:18:57.410151  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.419364  394315 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 10:18:57.419416  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.428434  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.437359  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.446280  394315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:18:57.454724  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.463785  394315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.472508  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.481248  394315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:18:57.488803  394315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:18:57.496308  394315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:18:57.573983  394315 ssh_runner.go:195] Run: sudo systemctl restart crio
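The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon_cgroup, and the unprivileged-port sysctl) before restarting CRI-O. A quick way to confirm the resulting settings, assuming the same node:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf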
	I1123 10:18:57.718163  394315 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:18:57.718238  394315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:18:57.722219  394315 start.go:564] Will wait 60s for crictl version
	I1123 10:18:57.722278  394315 ssh_runner.go:195] Run: which crictl
	I1123 10:18:57.726031  394315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:18:57.751027  394315 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:18:57.751130  394315 ssh_runner.go:195] Run: crio --version
	I1123 10:18:57.778633  394315 ssh_runner.go:195] Run: crio --version
	I1123 10:18:57.806895  394315 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:18:57.807958  394315 cli_runner.go:164] Run: docker network inspect newest-cni-956615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:18:57.825213  394315 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:18:57.829406  394315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:18:57.841175  394315 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1123 10:18:57.842167  394315 kubeadm.go:884] updating cluster {Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:18:57.842312  394315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:18:57.842362  394315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:18:57.874472  394315 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:18:57.874497  394315 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:18:57.874557  394315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:18:57.899498  394315 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:18:57.899520  394315 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:18:57.899529  394315 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 10:18:57.899664  394315 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-956615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:18:57.899753  394315 ssh_runner.go:195] Run: crio config
	I1123 10:18:57.945307  394315 cni.go:84] Creating CNI manager for ""
	I1123 10:18:57.945334  394315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:18:57.945353  394315 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 10:18:57.945385  394315 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-956615 NodeName:newest-cni-956615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:18:57.945529  394315 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-956615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
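The kubeadm, kubelet and kube-proxy configuration above is later written to /var/tmp/minikube/kubeadm.yaml.new on the node (see the scp a few lines below). If you wanted to validate a config like this outside the test, kubeadm can dry-run it; this invocation is illustrative only, not something the run executes:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run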
	
	I1123 10:18:57.945603  394315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:18:57.954040  394315 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:18:57.954111  394315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:18:57.962312  394315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 10:18:57.974790  394315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:18:57.987293  394315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1123 10:18:57.999467  394315 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:18:58.003369  394315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:18:58.012965  394315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:18:58.094317  394315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:18:58.124328  394315 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615 for IP: 192.168.76.2
	I1123 10:18:58.124349  394315 certs.go:195] generating shared ca certs ...
	I1123 10:18:58.124370  394315 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:58.124522  394315 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 10:18:58.124600  394315 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 10:18:58.124620  394315 certs.go:257] generating profile certs ...
	I1123 10:18:58.124722  394315 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/client.key
	I1123 10:18:58.124804  394315 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/apiserver.key.27a853cb
	I1123 10:18:58.124856  394315 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/proxy-client.key
	I1123 10:18:58.124994  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem (1338 bytes)
	W1123 10:18:58.125036  394315 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870_empty.pem, impossibly tiny 0 bytes
	I1123 10:18:58.125052  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 10:18:58.125113  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:18:58.125156  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:18:58.125191  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 10:18:58.125250  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:18:58.125897  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:18:58.144169  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:18:58.162839  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:18:58.181511  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 10:18:58.206364  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 10:18:58.224546  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:18:58.241212  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:18:58.257774  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:18:58.274527  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:18:58.291570  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem --> /usr/share/ca-certificates/67870.pem (1338 bytes)
	I1123 10:18:58.309143  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /usr/share/ca-certificates/678702.pem (1708 bytes)
	I1123 10:18:58.327593  394315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:18:58.340335  394315 ssh_runner.go:195] Run: openssl version
	I1123 10:18:58.346917  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:18:58.355590  394315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:18:58.359305  394315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:18:58.359346  394315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:18:58.394024  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:18:58.402117  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67870.pem && ln -fs /usr/share/ca-certificates/67870.pem /etc/ssl/certs/67870.pem"
	I1123 10:18:58.410347  394315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67870.pem
	I1123 10:18:58.413983  394315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:28 /usr/share/ca-certificates/67870.pem
	I1123 10:18:58.414033  394315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67870.pem
	I1123 10:18:58.447559  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67870.pem /etc/ssl/certs/51391683.0"
	I1123 10:18:58.455430  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678702.pem && ln -fs /usr/share/ca-certificates/678702.pem /etc/ssl/certs/678702.pem"
	I1123 10:18:58.463887  394315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678702.pem
	I1123 10:18:58.467518  394315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:28 /usr/share/ca-certificates/678702.pem
	I1123 10:18:58.467569  394315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678702.pem
	I1123 10:18:58.502214  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678702.pem /etc/ssl/certs/3ec20f2e.0"
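The openssl -hash / ln -fs pairs above follow OpenSSL's hashed-directory convention: the symlink under /etc/ssl/certs is the certificate's subject hash plus a ".0" suffix (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). The hash for any of the certs can be reproduced with:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem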
	I1123 10:18:58.510610  394315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:18:58.514564  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:18:58.548572  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:18:58.582475  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:18:58.617633  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:18:58.663551  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:18:58.706433  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
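Each -checkend 86400 call exits non-zero only if the certificate expires within the next 24 hours, which is presumably how this restart path decides the existing control-plane certs are still usable. Checking a single cert by hand looks like:

    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo "valid for >24h"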
	I1123 10:18:58.755355  394315 kubeadm.go:401] StartCluster: {Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:18:58.755458  394315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:18:58.755534  394315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:18:58.792290  394315 cri.go:89] found id: "568d4e2e13f794ad02a27313df00fc828eacd24d6ea3ba4e30c0855507078458"
	I1123 10:18:58.792321  394315 cri.go:89] found id: "cc3d50e3b18ae83441894d5866b2ff39bc525a005f871ba93a8d151eef685e8f"
	I1123 10:18:58.792327  394315 cri.go:89] found id: "ab7965c57730d7f61bd3cc6d5b19e95f55562ca947a390e4616eeb716906b8a0"
	I1123 10:18:58.792332  394315 cri.go:89] found id: "3e6bea1c7000431f1f92160966ebdcb4353c6a869289c185164951c1370b9403"
	I1123 10:18:58.792336  394315 cri.go:89] found id: ""
	I1123 10:18:58.792387  394315 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:18:58.806842  394315 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:18:58Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:18:58.806912  394315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:18:58.815260  394315 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:18:58.815280  394315 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:18:58.815325  394315 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:18:58.822691  394315 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:18:58.823363  394315 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-956615" does not appear in /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:18:58.823632  394315 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-64343/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-956615" cluster setting kubeconfig missing "newest-cni-956615" context setting]
	I1123 10:18:58.824148  394315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:58.825412  394315 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:18:58.833345  394315 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 10:18:58.833374  394315 kubeadm.go:602] duration metric: took 18.088164ms to restartPrimaryControlPlane
	I1123 10:18:58.833384  394315 kubeadm.go:403] duration metric: took 78.041992ms to StartCluster
	I1123 10:18:58.833401  394315 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:58.833464  394315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:18:58.834283  394315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:58.834490  394315 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:18:58.834556  394315 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:18:58.834673  394315 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-956615"
	I1123 10:18:58.834693  394315 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-956615"
	W1123 10:18:58.834705  394315 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:18:58.834716  394315 config.go:182] Loaded profile config "newest-cni-956615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:58.834736  394315 host.go:66] Checking if "newest-cni-956615" exists ...
	I1123 10:18:58.834733  394315 addons.go:70] Setting default-storageclass=true in profile "newest-cni-956615"
	I1123 10:18:58.834767  394315 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-956615"
	I1123 10:18:58.834749  394315 addons.go:70] Setting dashboard=true in profile "newest-cni-956615"
	I1123 10:18:58.834803  394315 addons.go:239] Setting addon dashboard=true in "newest-cni-956615"
	W1123 10:18:58.834825  394315 addons.go:248] addon dashboard should already be in state true
	I1123 10:18:58.834866  394315 host.go:66] Checking if "newest-cni-956615" exists ...
	I1123 10:18:58.835064  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:58.835255  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:58.835473  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:58.838196  394315 out.go:179] * Verifying Kubernetes components...
	I1123 10:18:58.839321  394315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:18:58.862172  394315 addons.go:239] Setting addon default-storageclass=true in "newest-cni-956615"
	W1123 10:18:58.862197  394315 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:18:58.862226  394315 host.go:66] Checking if "newest-cni-956615" exists ...
	I1123 10:18:58.862714  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:58.863432  394315 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:18:58.863504  394315 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:18:58.864523  394315 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:18:58.864548  394315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:18:58.864608  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:58.865756  394315 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1123 10:18:55.341834  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	W1123 10:18:57.342845  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	I1123 10:18:58.866799  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:18:58.866823  394315 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:18:58.866899  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:58.896558  394315 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:18:58.896587  394315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:18:58.896649  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:58.902289  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:58.906565  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:58.921464  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:59.001978  394315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:18:59.018766  394315 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:18:59.018846  394315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:18:59.020845  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:18:59.020869  394315 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:18:59.027142  394315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:18:59.034971  394315 api_server.go:72] duration metric: took 200.448073ms to wait for apiserver process to appear ...
	I1123 10:18:59.035003  394315 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:18:59.035026  394315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:18:59.037406  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:18:59.037477  394315 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:18:59.039283  394315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:18:59.053902  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:18:59.053928  394315 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:18:59.070168  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:18:59.070193  394315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:18:59.086290  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:18:59.086317  394315 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:18:59.103619  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:18:59.103647  394315 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:18:59.116917  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:18:59.116941  394315 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:18:59.129744  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:18:59.129770  394315 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:18:59.142130  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:18:59.142153  394315 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:18:59.154836  394315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:19:00.310954  394315 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 10:19:00.310988  394315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 10:19:00.311005  394315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:19:00.343552  394315 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 10:19:00.343631  394315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 10:19:00.535410  394315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:19:00.541409  394315 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:19:00.541448  394315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
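While the control plane comes back up, the anonymous probe first gets 403 and then 500 until the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, so the loop simply keeps polling. An authenticated spot check against the same endpoint (illustrative, using the kubeconfig this run just wrote) would be:

    kubectl --context newest-cni-956615 get --raw '/healthz?verbose'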
	I1123 10:19:00.873844  394315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.846667586s)
	I1123 10:19:00.873914  394315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.834599768s)
	I1123 10:19:00.874012  394315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.71914196s)
	I1123 10:19:00.875636  394315 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-956615 addons enable metrics-server
	
	I1123 10:19:00.885104  394315 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 10:19:00.886167  394315 addons.go:530] duration metric: took 2.051621498s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
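With the three addons reported as enabled, their state can be confirmed from outside the test; the kubernetes-dashboard namespace below is the one the dashboard addon normally creates, so treat it as an assumption:

    minikube -p newest-cni-956615 addons list
    kubectl --context newest-cni-956615 -n kubernetes-dashboard get pods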
	I1123 10:19:01.035140  394315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:19:01.039364  394315 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:19:01.039396  394315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:19:01.535682  394315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:19:01.540794  394315 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:19:01.541893  394315 api_server.go:141] control plane version: v1.34.1
	I1123 10:19:01.541921  394315 api_server.go:131] duration metric: took 2.506910717s to wait for apiserver health ...
	I1123 10:19:01.541930  394315 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:19:01.545766  394315 system_pods.go:59] 8 kube-system pods found
	I1123 10:19:01.545807  394315 system_pods.go:61] "coredns-66bc5c9577-f5fbv" [a2a6f660-7d27-4ea8-b5b3-af124330c296] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 10:19:01.545816  394315 system_pods.go:61] "etcd-newest-cni-956615" [f8a39510-5fa3-42e6-a37e-6ceb4ff74876] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:19:01.545824  394315 system_pods.go:61] "kindnet-pfcv2" [5b3ef87c-1b75-4bb7-bafc-049f36caebc5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 10:19:01.545831  394315 system_pods.go:61] "kube-apiserver-newest-cni-956615" [05c7eaaf-a379-4c0e-b15e-b4fd9b251e21] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:19:01.545842  394315 system_pods.go:61] "kube-controller-manager-newest-cni-956615" [9a577ee2-bcae-49ed-a341-0361d8b3e799] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:19:01.545848  394315 system_pods.go:61] "kube-proxy-ktlnh" [ca7b0e9b-f2f8-4b3f-92d0-691144b655a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 10:19:01.545856  394315 system_pods.go:61] "kube-scheduler-newest-cni-956615" [4eb905ef-9079-49bf-97cf-87d904882001] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:19:01.545861  394315 system_pods.go:61] "storage-provisioner" [3cdc36f3-a1eb-45d6-9e02-f2c0514c2888] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 10:19:01.545870  394315 system_pods.go:74] duration metric: took 3.934068ms to wait for pod list to return data ...
	I1123 10:19:01.545877  394315 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:19:01.548629  394315 default_sa.go:45] found service account: "default"
	I1123 10:19:01.548653  394315 default_sa.go:55] duration metric: took 2.766657ms for default service account to be created ...
	I1123 10:19:01.548665  394315 kubeadm.go:587] duration metric: took 2.714149617s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:19:01.548682  394315 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:19:01.551434  394315 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:19:01.551456  394315 node_conditions.go:123] node cpu capacity is 8
	I1123 10:19:01.551477  394315 node_conditions.go:105] duration metric: took 2.79002ms to run NodePressure ...
	I1123 10:19:01.551492  394315 start.go:242] waiting for startup goroutines ...
	I1123 10:19:01.551505  394315 start.go:247] waiting for cluster config update ...
	I1123 10:19:01.551523  394315 start.go:256] writing updated cluster config ...
	I1123 10:19:01.551766  394315 ssh_runner.go:195] Run: rm -f paused
	I1123 10:19:01.602233  394315 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:19:01.604242  394315 out.go:179] * Done! kubectl is now configured to use "newest-cni-956615" cluster and "default" namespace by default
	W1123 10:18:59.842677  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	W1123 10:19:02.343352  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.494541865Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.499759464Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=08638d8a-c840-47e4-8aff-a75c579db976 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.500171184Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=8f135dce-8d8a-422b-96e8-dc193ca28925 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.501492811Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.50211159Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.502145793Z" level=info msg="Ran pod sandbox 671e983c6fc90fde0774b2df84cdc05fa81a94801a1ee34622d5ad545aea16bd with infra container: kube-system/kindnet-pfcv2/POD" id=08638d8a-c840-47e4-8aff-a75c579db976 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.502895953Z" level=info msg="Ran pod sandbox 2f537ce5071d7ca9de345c82e2f0fb9a89653e3eee549c2d28151f41244b821f with infra container: kube-system/kube-proxy-ktlnh/POD" id=8f135dce-8d8a-422b-96e8-dc193ca28925 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.503453116Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3b95cb25-0641-43a3-807c-9c73eb2ad19c name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.503836279Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=3bd9fa11-3f01-488c-9e78-6113b90c4380 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.50444829Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=94b6ff7f-1c51-4710-9971-3eba646f87f6 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.504719628Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=aad314ad-bf8f-4953-a880-ef08b4a44cab name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.506278187Z" level=info msg="Creating container: kube-system/kube-proxy-ktlnh/kube-proxy" id=013cad83-7d07-4f7e-9de9-242f4eefaa17 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.506393572Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.50646503Z" level=info msg="Creating container: kube-system/kindnet-pfcv2/kindnet-cni" id=b9bc77de-6b3c-4342-9e97-2811cd9aefec name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.506557583Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.511007546Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.511630654Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.51174802Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.512295677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.547146682Z" level=info msg="Created container b05323c6c0ab1938eb1588a7d2d3feb32f80596ed02fed4cce85977e5a3e22b2: kube-system/kindnet-pfcv2/kindnet-cni" id=b9bc77de-6b3c-4342-9e97-2811cd9aefec name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.54788301Z" level=info msg="Starting container: b05323c6c0ab1938eb1588a7d2d3feb32f80596ed02fed4cce85977e5a3e22b2" id=726ddcc5-63db-4dd5-b818-9ee70daa2e72 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.549985226Z" level=info msg="Started container" PID=1051 containerID=b05323c6c0ab1938eb1588a7d2d3feb32f80596ed02fed4cce85977e5a3e22b2 description=kube-system/kindnet-pfcv2/kindnet-cni id=726ddcc5-63db-4dd5-b818-9ee70daa2e72 name=/runtime.v1.RuntimeService/StartContainer sandboxID=671e983c6fc90fde0774b2df84cdc05fa81a94801a1ee34622d5ad545aea16bd
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.553430747Z" level=info msg="Created container 78484bdafb835b4a204df4db8a2d43436469113977af7e007b407536a0297189: kube-system/kube-proxy-ktlnh/kube-proxy" id=013cad83-7d07-4f7e-9de9-242f4eefaa17 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.554122649Z" level=info msg="Starting container: 78484bdafb835b4a204df4db8a2d43436469113977af7e007b407536a0297189" id=14137c3b-be2d-4878-9c6f-b7586d9d0a53 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:19:01 newest-cni-956615 crio[522]: time="2025-11-23T10:19:01.556730737Z" level=info msg="Started container" PID=1052 containerID=78484bdafb835b4a204df4db8a2d43436469113977af7e007b407536a0297189 description=kube-system/kube-proxy-ktlnh/kube-proxy id=14137c3b-be2d-4878-9c6f-b7586d9d0a53 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2f537ce5071d7ca9de345c82e2f0fb9a89653e3eee549c2d28151f41244b821f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b05323c6c0ab1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   671e983c6fc90       kindnet-pfcv2                               kube-system
	78484bdafb835       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   5 seconds ago       Running             kube-proxy                1                   2f537ce5071d7       kube-proxy-ktlnh                            kube-system
	568d4e2e13f79       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   9883c90ab3982       kube-scheduler-newest-cni-956615            kube-system
	cc3d50e3b18ae       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   9782d216f2d59       kube-controller-manager-newest-cni-956615   kube-system
	ab7965c57730d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 seconds ago       Running             etcd                      1                   3c615633e20d4       etcd-newest-cni-956615                      kube-system
	3e6bea1c70004       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 seconds ago       Running             kube-apiserver            1                   fdc95a1d7fc54       kube-apiserver-newest-cni-956615            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-956615
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-956615
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=newest-cni-956615
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_18_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:18:32 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-956615
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:19:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:19:00 +0000   Sun, 23 Nov 2025 10:18:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:19:00 +0000   Sun, 23 Nov 2025 10:18:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:19:00 +0000   Sun, 23 Nov 2025 10:18:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 23 Nov 2025 10:19:00 +0000   Sun, 23 Nov 2025 10:18:31 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-956615
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                a5d2206a-9559-4bfb-833b-e4a1b122ea26
	  Boot ID:                    37682299-5e60-467e-85b2-43c912a4056e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-956615                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-pfcv2                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-956615             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-956615    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-ktlnh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-956615             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 26s              kube-proxy       
	  Normal  Starting                 5s               kube-proxy       
	  Normal  Starting                 33s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s              kubelet          Node newest-cni-956615 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s              kubelet          Node newest-cni-956615 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s              kubelet          Node newest-cni-956615 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s              node-controller  Node newest-cni-956615 event: Registered Node newest-cni-956615 in Controller
	  Normal  Starting                 9s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)  kubelet          Node newest-cni-956615 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)  kubelet          Node newest-cni-956615 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x8 over 9s)  kubelet          Node newest-cni-956615 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s               node-controller  Node newest-cni-956615 event: Registered Node newest-cni-956615 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[ +16.383752] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 09:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 10:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[Nov23 10:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 16 63 a6 3b 7c 08 06
	[  +0.000421] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e f8 56 88 48 d7 08 06
	[  +0.082350] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff be 6d 17 98 af e9 08 06
	[  +0.000334] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[ +24.687881] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 3c b3 56 e6 32 08 06
	[  +0.000364] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da b2 25 9e f0 5d 08 06
	[Nov23 10:16] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	[ +42.472302] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 bc be 6d 36 b3 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	
	
	==> etcd [ab7965c57730d7f61bd3cc6d5b19e95f55562ca947a390e4616eeb716906b8a0] <==
	{"level":"warn","ts":"2025-11-23T10:18:59.647121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.656162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.662607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.672377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.675611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.682421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.689558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.696555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.704456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.713240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.720712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.727879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.734787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.741246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.748731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.755902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.763454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.770016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.777204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.784126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.791341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.812563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.819749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.828957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:59.882662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34982","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:19:07 up  3:01,  0 user,  load average: 3.63, 4.69, 3.00
	Linux newest-cni-956615 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b05323c6c0ab1938eb1588a7d2d3feb32f80596ed02fed4cce85977e5a3e22b2] <==
	I1123 10:19:01.821981       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:19:01.822344       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 10:19:01.822465       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:19:01.822479       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:19:01.822501       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:19:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:19:01.934491       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:19:01.934547       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:19:01.934560       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:19:02.021879       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:19:02.391299       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:19:02.391322       1 metrics.go:72] Registering metrics
	I1123 10:19:02.391398       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [3e6bea1c7000431f1f92160966ebdcb4353c6a869289c185164951c1370b9403] <==
	I1123 10:19:00.417033       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 10:19:00.417622       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 10:19:00.417671       1 aggregator.go:171] initial CRD sync complete...
	I1123 10:19:00.417680       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 10:19:00.417700       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:19:00.417707       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:19:00.427588       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 10:19:00.447509       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 10:19:00.455755       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 10:19:00.455788       1 policy_source.go:240] refreshing policies
	I1123 10:19:00.456310       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:19:00.498407       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 10:19:00.498438       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 10:19:00.679255       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:19:00.707416       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:19:00.724218       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:19:00.731313       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:19:00.737543       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:19:00.766682       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.155.50"}
	I1123 10:19:00.776221       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.61.6"}
	I1123 10:19:01.303197       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:19:03.089995       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:19:03.488566       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:19:03.488566       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:19:03.589554       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [cc3d50e3b18ae83441894d5866b2ff39bc525a005f871ba93a8d151eef685e8f] <==
	I1123 10:19:03.083438       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 10:19:03.085716       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 10:19:03.085746       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 10:19:03.085789       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 10:19:03.085803       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 10:19:03.086083       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 10:19:03.086440       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 10:19:03.086547       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 10:19:03.086812       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 10:19:03.087812       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 10:19:03.090020       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 10:19:03.092300       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 10:19:03.092312       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:19:03.092474       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 10:19:03.092568       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 10:19:03.092585       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 10:19:03.092594       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 10:19:03.095321       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:19:03.095339       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 10:19:03.095347       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 10:19:03.097682       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 10:19:03.102969       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 10:19:03.104177       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:19:03.105268       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:19:03.107565       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [78484bdafb835b4a204df4db8a2d43436469113977af7e007b407536a0297189] <==
	I1123 10:19:01.590458       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:19:01.660909       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:19:01.761961       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:19:01.762008       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 10:19:01.762154       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:19:01.786038       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:19:01.786126       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:19:01.791325       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:19:01.791627       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:19:01.791661       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:19:01.792897       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:19:01.792918       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:19:01.792955       1 config.go:200] "Starting service config controller"
	I1123 10:19:01.792968       1 config.go:309] "Starting node config controller"
	I1123 10:19:01.792982       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:19:01.792988       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:19:01.792990       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:19:01.792997       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:19:01.792964       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:19:01.893989       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:19:01.894120       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:19:01.894246       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [568d4e2e13f794ad02a27313df00fc828eacd24d6ea3ba4e30c0855507078458] <==
	I1123 10:18:59.115184       1 serving.go:386] Generated self-signed cert in-memory
	W1123 10:19:00.325617       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 10:19:00.327456       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 10:19:00.327485       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 10:19:00.327497       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 10:19:00.373437       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 10:19:00.377158       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:19:00.380122       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:19:00.380158       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:19:00.386881       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:19:00.387130       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1123 10:19:00.389425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 10:19:00.398987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1123 10:19:01.481164       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: E1123 10:19:00.230326     677 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-956615\" not found" node="newest-cni-956615"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: I1123 10:19:00.391490     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-956615"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: E1123 10:19:00.527801     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-956615\" already exists" pod="kube-system/kube-apiserver-newest-cni-956615"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: I1123 10:19:00.527851     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-956615"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: I1123 10:19:00.530247     677 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-956615"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: I1123 10:19:00.530359     677 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-956615"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: I1123 10:19:00.530404     677 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: I1123 10:19:00.531338     677 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: E1123 10:19:00.535602     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-956615\" already exists" pod="kube-system/kube-controller-manager-newest-cni-956615"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: I1123 10:19:00.535636     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-956615"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: E1123 10:19:00.542984     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-956615\" already exists" pod="kube-system/kube-scheduler-newest-cni-956615"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: I1123 10:19:00.543021     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-956615"
	Nov 23 10:19:00 newest-cni-956615 kubelet[677]: E1123 10:19:00.549949     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-956615\" already exists" pod="kube-system/etcd-newest-cni-956615"
	Nov 23 10:19:01 newest-cni-956615 kubelet[677]: I1123 10:19:01.186156     677 apiserver.go:52] "Watching apiserver"
	Nov 23 10:19:01 newest-cni-956615 kubelet[677]: I1123 10:19:01.191180     677 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 23 10:19:01 newest-cni-956615 kubelet[677]: I1123 10:19:01.230340     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-956615"
	Nov 23 10:19:01 newest-cni-956615 kubelet[677]: E1123 10:19:01.235637     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-956615\" already exists" pod="kube-system/kube-apiserver-newest-cni-956615"
	Nov 23 10:19:01 newest-cni-956615 kubelet[677]: I1123 10:19:01.244360     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca7b0e9b-f2f8-4b3f-92d0-691144b655a6-lib-modules\") pod \"kube-proxy-ktlnh\" (UID: \"ca7b0e9b-f2f8-4b3f-92d0-691144b655a6\") " pod="kube-system/kube-proxy-ktlnh"
	Nov 23 10:19:01 newest-cni-956615 kubelet[677]: I1123 10:19:01.244441     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5b3ef87c-1b75-4bb7-bafc-049f36caebc5-cni-cfg\") pod \"kindnet-pfcv2\" (UID: \"5b3ef87c-1b75-4bb7-bafc-049f36caebc5\") " pod="kube-system/kindnet-pfcv2"
	Nov 23 10:19:01 newest-cni-956615 kubelet[677]: I1123 10:19:01.244616     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b3ef87c-1b75-4bb7-bafc-049f36caebc5-lib-modules\") pod \"kindnet-pfcv2\" (UID: \"5b3ef87c-1b75-4bb7-bafc-049f36caebc5\") " pod="kube-system/kindnet-pfcv2"
	Nov 23 10:19:01 newest-cni-956615 kubelet[677]: I1123 10:19:01.244772     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca7b0e9b-f2f8-4b3f-92d0-691144b655a6-xtables-lock\") pod \"kube-proxy-ktlnh\" (UID: \"ca7b0e9b-f2f8-4b3f-92d0-691144b655a6\") " pod="kube-system/kube-proxy-ktlnh"
	Nov 23 10:19:01 newest-cni-956615 kubelet[677]: I1123 10:19:01.244806     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b3ef87c-1b75-4bb7-bafc-049f36caebc5-xtables-lock\") pod \"kindnet-pfcv2\" (UID: \"5b3ef87c-1b75-4bb7-bafc-049f36caebc5\") " pod="kube-system/kindnet-pfcv2"
	Nov 23 10:19:02 newest-cni-956615 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:19:02 newest-cni-956615 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:19:02 newest-cni-956615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-956615 -n newest-cni-956615
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-956615 -n newest-cni-956615: exit status 2 (327.301954ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-956615 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-f5fbv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tzd8g kubernetes-dashboard-855c9754f9-z66k2
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-956615 describe pod coredns-66bc5c9577-f5fbv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tzd8g kubernetes-dashboard-855c9754f9-z66k2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-956615 describe pod coredns-66bc5c9577-f5fbv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tzd8g kubernetes-dashboard-855c9754f9-z66k2: exit status 1 (61.513041ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-f5fbv" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-tzd8g" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-z66k2" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-956615 describe pod coredns-66bc5c9577-f5fbv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tzd8g kubernetes-dashboard-855c9754f9-z66k2: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.70s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-772252 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-772252 --alsologtostderr -v=1: exit status 80 (2.329315704s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-772252 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:19:23.985906  398653 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:19:23.986025  398653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:19:23.986034  398653 out.go:374] Setting ErrFile to fd 2...
	I1123 10:19:23.986038  398653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:19:23.986301  398653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:19:23.986534  398653 out.go:368] Setting JSON to false
	I1123 10:19:23.986558  398653 mustload.go:66] Loading cluster: default-k8s-diff-port-772252
	I1123 10:19:23.986869  398653 config.go:182] Loaded profile config "default-k8s-diff-port-772252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:19:23.987256  398653 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772252 --format={{.State.Status}}
	I1123 10:19:24.005007  398653 host.go:66] Checking if "default-k8s-diff-port-772252" exists ...
	I1123 10:19:24.005305  398653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:19:24.061207  398653 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-23 10:19:24.051617331 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:19:24.061818  398653 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-772252 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 10:19:24.063677  398653 out.go:179] * Pausing node default-k8s-diff-port-772252 ... 
	I1123 10:19:24.064918  398653 host.go:66] Checking if "default-k8s-diff-port-772252" exists ...
	I1123 10:19:24.065214  398653 ssh_runner.go:195] Run: systemctl --version
	I1123 10:19:24.065268  398653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772252
	I1123 10:19:24.083842  398653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/default-k8s-diff-port-772252/id_rsa Username:docker}
	I1123 10:19:24.183598  398653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:19:24.196074  398653 pause.go:52] kubelet running: true
	I1123 10:19:24.196156  398653 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:19:24.354594  398653 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:19:24.354666  398653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:19:24.420294  398653 cri.go:89] found id: "ecfb1f7191713b2b7e08f8913c6ed3071ab3fd46d99823ee5dfef933d862b004"
	I1123 10:19:24.420338  398653 cri.go:89] found id: "f06dad898472c6e7ed3a85518f155634c223f818408388a6b6fff1ecce478bc4"
	I1123 10:19:24.420344  398653 cri.go:89] found id: "6aaac7d5aab2fdbe3b38a918864ef4d8be7510c3bdc381a0f0c2f96fa7f330d6"
	I1123 10:19:24.420347  398653 cri.go:89] found id: "d43792ab06a602698a2e5d811ffc178fcc156441aa702f132a1a4a324793f51c"
	I1123 10:19:24.420350  398653 cri.go:89] found id: "245e87d8d135aee2d7da0358a8becc82fe70154db598981be707ef69925970f0"
	I1123 10:19:24.420353  398653 cri.go:89] found id: "ca0b7481c92ffd4b2bbdda49cb03c9b00d30df31c6dab4f9e33326e98ce4ab98"
	I1123 10:19:24.420356  398653 cri.go:89] found id: "7a142a8a31476f2dae05bfa267e6bed44ff2ff202efa2cb9c52dce5a34c9cb88"
	I1123 10:19:24.420358  398653 cri.go:89] found id: "a176b6c574c4db89ccebca8123845fafee7b14ca1a0baae180f32d747de3393a"
	I1123 10:19:24.420361  398653 cri.go:89] found id: "7db7bd227bf9ff6dab49de87c436200ac4ce2681564d93007f27e8429ac58b29"
	I1123 10:19:24.420367  398653 cri.go:89] found id: "7c3d5b52c5c83de3ca67ce90bb05bdd0ceb08abe56ed1f6ae756cc422b40a7a5"
	I1123 10:19:24.420370  398653 cri.go:89] found id: "d1e84d4b33a1e182b32a2df434b3eb1086c1002fcd0c9d64f056f4a58c281c75"
	I1123 10:19:24.420372  398653 cri.go:89] found id: ""
	I1123 10:19:24.420422  398653 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:19:24.432202  398653 retry.go:31] will retry after 276.282283ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:19:24Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:19:24.708685  398653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:19:24.721360  398653 pause.go:52] kubelet running: false
	I1123 10:19:24.721415  398653 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:19:24.861739  398653 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:19:24.861845  398653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:19:24.926248  398653 cri.go:89] found id: "ecfb1f7191713b2b7e08f8913c6ed3071ab3fd46d99823ee5dfef933d862b004"
	I1123 10:19:24.926287  398653 cri.go:89] found id: "f06dad898472c6e7ed3a85518f155634c223f818408388a6b6fff1ecce478bc4"
	I1123 10:19:24.926293  398653 cri.go:89] found id: "6aaac7d5aab2fdbe3b38a918864ef4d8be7510c3bdc381a0f0c2f96fa7f330d6"
	I1123 10:19:24.926296  398653 cri.go:89] found id: "d43792ab06a602698a2e5d811ffc178fcc156441aa702f132a1a4a324793f51c"
	I1123 10:19:24.926299  398653 cri.go:89] found id: "245e87d8d135aee2d7da0358a8becc82fe70154db598981be707ef69925970f0"
	I1123 10:19:24.926303  398653 cri.go:89] found id: "ca0b7481c92ffd4b2bbdda49cb03c9b00d30df31c6dab4f9e33326e98ce4ab98"
	I1123 10:19:24.926306  398653 cri.go:89] found id: "7a142a8a31476f2dae05bfa267e6bed44ff2ff202efa2cb9c52dce5a34c9cb88"
	I1123 10:19:24.926308  398653 cri.go:89] found id: "a176b6c574c4db89ccebca8123845fafee7b14ca1a0baae180f32d747de3393a"
	I1123 10:19:24.926311  398653 cri.go:89] found id: "7db7bd227bf9ff6dab49de87c436200ac4ce2681564d93007f27e8429ac58b29"
	I1123 10:19:24.926328  398653 cri.go:89] found id: "7c3d5b52c5c83de3ca67ce90bb05bdd0ceb08abe56ed1f6ae756cc422b40a7a5"
	I1123 10:19:24.926331  398653 cri.go:89] found id: "d1e84d4b33a1e182b32a2df434b3eb1086c1002fcd0c9d64f056f4a58c281c75"
	I1123 10:19:24.926334  398653 cri.go:89] found id: ""
	I1123 10:19:24.926375  398653 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:19:24.937760  398653 retry.go:31] will retry after 528.843944ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:19:24Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:19:25.467575  398653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:19:25.480613  398653 pause.go:52] kubelet running: false
	I1123 10:19:25.480682  398653 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:19:25.616542  398653 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:19:25.616621  398653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:19:25.680959  398653 cri.go:89] found id: "ecfb1f7191713b2b7e08f8913c6ed3071ab3fd46d99823ee5dfef933d862b004"
	I1123 10:19:25.680982  398653 cri.go:89] found id: "f06dad898472c6e7ed3a85518f155634c223f818408388a6b6fff1ecce478bc4"
	I1123 10:19:25.680987  398653 cri.go:89] found id: "6aaac7d5aab2fdbe3b38a918864ef4d8be7510c3bdc381a0f0c2f96fa7f330d6"
	I1123 10:19:25.680993  398653 cri.go:89] found id: "d43792ab06a602698a2e5d811ffc178fcc156441aa702f132a1a4a324793f51c"
	I1123 10:19:25.680997  398653 cri.go:89] found id: "245e87d8d135aee2d7da0358a8becc82fe70154db598981be707ef69925970f0"
	I1123 10:19:25.681002  398653 cri.go:89] found id: "ca0b7481c92ffd4b2bbdda49cb03c9b00d30df31c6dab4f9e33326e98ce4ab98"
	I1123 10:19:25.681006  398653 cri.go:89] found id: "7a142a8a31476f2dae05bfa267e6bed44ff2ff202efa2cb9c52dce5a34c9cb88"
	I1123 10:19:25.681009  398653 cri.go:89] found id: "a176b6c574c4db89ccebca8123845fafee7b14ca1a0baae180f32d747de3393a"
	I1123 10:19:25.681014  398653 cri.go:89] found id: "7db7bd227bf9ff6dab49de87c436200ac4ce2681564d93007f27e8429ac58b29"
	I1123 10:19:25.681030  398653 cri.go:89] found id: "7c3d5b52c5c83de3ca67ce90bb05bdd0ceb08abe56ed1f6ae756cc422b40a7a5"
	I1123 10:19:25.681035  398653 cri.go:89] found id: "d1e84d4b33a1e182b32a2df434b3eb1086c1002fcd0c9d64f056f4a58c281c75"
	I1123 10:19:25.681047  398653 cri.go:89] found id: ""
	I1123 10:19:25.681102  398653 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:19:25.692651  398653 retry.go:31] will retry after 328.521729ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:19:25Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:19:26.022283  398653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:19:26.035103  398653 pause.go:52] kubelet running: false
	I1123 10:19:26.035168  398653 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:19:26.171659  398653 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:19:26.171738  398653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:19:26.235764  398653 cri.go:89] found id: "ecfb1f7191713b2b7e08f8913c6ed3071ab3fd46d99823ee5dfef933d862b004"
	I1123 10:19:26.235792  398653 cri.go:89] found id: "f06dad898472c6e7ed3a85518f155634c223f818408388a6b6fff1ecce478bc4"
	I1123 10:19:26.235797  398653 cri.go:89] found id: "6aaac7d5aab2fdbe3b38a918864ef4d8be7510c3bdc381a0f0c2f96fa7f330d6"
	I1123 10:19:26.235800  398653 cri.go:89] found id: "d43792ab06a602698a2e5d811ffc178fcc156441aa702f132a1a4a324793f51c"
	I1123 10:19:26.235803  398653 cri.go:89] found id: "245e87d8d135aee2d7da0358a8becc82fe70154db598981be707ef69925970f0"
	I1123 10:19:26.235806  398653 cri.go:89] found id: "ca0b7481c92ffd4b2bbdda49cb03c9b00d30df31c6dab4f9e33326e98ce4ab98"
	I1123 10:19:26.235809  398653 cri.go:89] found id: "7a142a8a31476f2dae05bfa267e6bed44ff2ff202efa2cb9c52dce5a34c9cb88"
	I1123 10:19:26.235812  398653 cri.go:89] found id: "a176b6c574c4db89ccebca8123845fafee7b14ca1a0baae180f32d747de3393a"
	I1123 10:19:26.235815  398653 cri.go:89] found id: "7db7bd227bf9ff6dab49de87c436200ac4ce2681564d93007f27e8429ac58b29"
	I1123 10:19:26.235821  398653 cri.go:89] found id: "7c3d5b52c5c83de3ca67ce90bb05bdd0ceb08abe56ed1f6ae756cc422b40a7a5"
	I1123 10:19:26.235824  398653 cri.go:89] found id: "d1e84d4b33a1e182b32a2df434b3eb1086c1002fcd0c9d64f056f4a58c281c75"
	I1123 10:19:26.235827  398653 cri.go:89] found id: ""
	I1123 10:19:26.235865  398653 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:19:26.249118  398653 out.go:203] 
	W1123 10:19:26.250128  398653 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:19:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:19:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:19:26.250143  398653 out.go:285] * 
	* 
	W1123 10:19:26.254725  398653 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:19:26.255672  398653 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-772252 --alsologtostderr -v=1 failed: exit status 80
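Note: the pause exits with GUEST_PAUSE because every attempt at "sudo runc list -f json" fails with "open /run/runc: no such file or directory", even though crictl can still list the kube-system containers. A minimal manual reproduction of that check, as a sketch only (it assumes the profile from this run still exists and that the node has the runtime layout shown above):

	PROFILE=default-k8s-diff-port-772252
	# Same listing cri.go issues over SSH; this step succeeded in the log above.
	out/minikube-linux-amd64 -p "$PROFILE" ssh -- 'sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system'
	# The call that failed: runc's state directory is missing on the node.
	out/minikube-linux-amd64 -p "$PROFILE" ssh -- 'sudo runc list -f json'
	# See which runtime state directories actually exist; if crio were configured with
	# crun instead of runc (a guess, not confirmed by this log), /run/runc would never be created.
	out/minikube-linux-amd64 -p "$PROFILE" ssh -- 'ls -ld /run/runc /run/crun /run/crio'
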
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-772252
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-772252:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e477e779f8bb284cda7af0a956566e37954aab2553cf746d0ae2cffb94c6e8bd",
	        "Created": "2025-11-23T10:17:18.483940214Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 390267,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:18:29.253134412Z",
	            "FinishedAt": "2025-11-23T10:18:28.357069206Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/e477e779f8bb284cda7af0a956566e37954aab2553cf746d0ae2cffb94c6e8bd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e477e779f8bb284cda7af0a956566e37954aab2553cf746d0ae2cffb94c6e8bd/hostname",
	        "HostsPath": "/var/lib/docker/containers/e477e779f8bb284cda7af0a956566e37954aab2553cf746d0ae2cffb94c6e8bd/hosts",
	        "LogPath": "/var/lib/docker/containers/e477e779f8bb284cda7af0a956566e37954aab2553cf746d0ae2cffb94c6e8bd/e477e779f8bb284cda7af0a956566e37954aab2553cf746d0ae2cffb94c6e8bd-json.log",
	        "Name": "/default-k8s-diff-port-772252",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-772252:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-772252",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e477e779f8bb284cda7af0a956566e37954aab2553cf746d0ae2cffb94c6e8bd",
	                "LowerDir": "/var/lib/docker/overlay2/361c50e32123a50aa7fcfec243d28300895e72a7fd05ca5549049a366f302526-init/diff:/var/lib/docker/overlay2/fa24abb4c55f78a010c7e2a32f724b8d5e912441e40bb77877899b0e5f3a9c8d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/361c50e32123a50aa7fcfec243d28300895e72a7fd05ca5549049a366f302526/merged",
	                "UpperDir": "/var/lib/docker/overlay2/361c50e32123a50aa7fcfec243d28300895e72a7fd05ca5549049a366f302526/diff",
	                "WorkDir": "/var/lib/docker/overlay2/361c50e32123a50aa7fcfec243d28300895e72a7fd05ca5549049a366f302526/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-772252",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-772252/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-772252",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-772252",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-772252",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c474cdbbbb8b5ed92c2965845c72ac246edc6e0e1e2cda55c1963505cc4efda2",
	            "SandboxKey": "/var/run/docker/netns/c474cdbbbb8b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-772252": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "02dae00b49d770f584e401c586980c0831e8332aaaff622d8a3a7b262132c748",
	                    "EndpointID": "5c7093a605be83533f84814602b8ab0197add586525e0eecb52d19f90d7133ce",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "aa:d2:74:d1:6a:eb",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-772252",
	                        "e477e779f8bb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
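The HostPort values under NetworkSettings.Ports above are the dynamically published ports the tooling relies on in this run: 22/tcp maps to 33130 (the SSH port sshutil dialed during the pause attempt) and 8444/tcp, the --apiserver-port chosen for this profile, maps to 33133. A quick way to read a mapping back is the same Go template cli_runner uses (a sketch; it assumes the container is still running):

	# Prints 33130 in this run: the host port published for SSH (22/tcp).
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-772252
	# Prints 33133 in this run: the host port for the 8444 API server endpoint.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-772252
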
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-772252 -n default-k8s-diff-port-772252
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-772252 -n default-k8s-diff-port-772252: exit status 2 (321.226127ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-772252 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-772252 logs -n 25: (1.042019159s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-772252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-772252 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ delete  │ -p old-k8s-version-990757                                                                                                                                                                                                                     │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ no-preload-541522 image list --format=json                                                                                                                                                                                                    │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p no-preload-541522 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ delete  │ -p old-k8s-version-990757                                                                                                                                                                                                                     │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ embed-certs-412306 image list --format=json                                                                                                                                                                                                   │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p newest-cni-956615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p embed-certs-412306 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ delete  │ -p no-preload-541522                                                                                                                                                                                                                          │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ delete  │ -p no-preload-541522                                                                                                                                                                                                                          │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ delete  │ -p embed-certs-412306                                                                                                                                                                                                                         │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ delete  │ -p embed-certs-412306                                                                                                                                                                                                                         │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-772252 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p default-k8s-diff-port-772252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-956615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ stop    │ -p newest-cni-956615 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ addons  │ enable dashboard -p newest-cni-956615 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p newest-cni-956615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:19 UTC │
	│ image   │ newest-cni-956615 image list --format=json                                                                                                                                                                                                    │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │ 23 Nov 25 10:19 UTC │
	│ pause   │ -p newest-cni-956615 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │                     │
	│ delete  │ -p newest-cni-956615                                                                                                                                                                                                                          │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │ 23 Nov 25 10:19 UTC │
	│ delete  │ -p newest-cni-956615                                                                                                                                                                                                                          │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │ 23 Nov 25 10:19 UTC │
	│ image   │ default-k8s-diff-port-772252 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │ 23 Nov 25 10:19 UTC │
	│ pause   │ -p default-k8s-diff-port-772252 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:18:51
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:18:51.922533  394315 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:18:51.922773  394315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:51.922782  394315 out.go:374] Setting ErrFile to fd 2...
	I1123 10:18:51.922786  394315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:51.922982  394315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:18:51.923464  394315 out.go:368] Setting JSON to false
	I1123 10:18:51.924704  394315 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10873,"bootTime":1763882259,"procs":448,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:18:51.924759  394315 start.go:143] virtualization: kvm guest
	I1123 10:18:51.926884  394315 out.go:179] * [newest-cni-956615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:18:51.928337  394315 notify.go:221] Checking for updates...
	I1123 10:18:51.928373  394315 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:18:51.929751  394315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:18:51.931020  394315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:18:51.932349  394315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 10:18:51.933744  394315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:18:51.935099  394315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:18:51.936795  394315 config.go:182] Loaded profile config "newest-cni-956615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:51.937407  394315 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:18:51.961344  394315 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 10:18:51.961523  394315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:18:52.019286  394315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-23 10:18:52.009047301 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:18:52.019399  394315 docker.go:319] overlay module found
	I1123 10:18:52.021270  394315 out.go:179] * Using the docker driver based on existing profile
	I1123 10:18:52.022550  394315 start.go:309] selected driver: docker
	I1123 10:18:52.022565  394315 start.go:927] validating driver "docker" against &{Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:18:52.022649  394315 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:18:52.023207  394315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:18:52.080543  394315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-23 10:18:52.070324364 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:18:52.080908  394315 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:18:52.080949  394315 cni.go:84] Creating CNI manager for ""
	I1123 10:18:52.081035  394315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:18:52.081106  394315 start.go:353] cluster config:
	{Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:18:52.083159  394315 out.go:179] * Starting "newest-cni-956615" primary control-plane node in "newest-cni-956615" cluster
	I1123 10:18:52.084255  394315 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:18:52.085479  394315 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:18:52.086568  394315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:18:52.086596  394315 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 10:18:52.086606  394315 cache.go:65] Caching tarball of preloaded images
	I1123 10:18:52.086653  394315 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:18:52.086679  394315 preload.go:238] Found /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 10:18:52.086690  394315 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:18:52.086776  394315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/config.json ...
	I1123 10:18:52.108195  394315 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:18:52.108214  394315 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:18:52.108225  394315 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:18:52.108262  394315 start.go:360] acquireMachinesLock for newest-cni-956615: {Name:mk5c1d30234ac54be25b363f4d474b6dfbb1cb30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:18:52.108312  394315 start.go:364] duration metric: took 32.687µs to acquireMachinesLock for "newest-cni-956615"
	I1123 10:18:52.108328  394315 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:18:52.108334  394315 fix.go:54] fixHost starting: 
	I1123 10:18:52.108536  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:52.125249  394315 fix.go:112] recreateIfNeeded on newest-cni-956615: state=Stopped err=<nil>
	W1123 10:18:52.125297  394315 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 10:18:50.342961  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	W1123 10:18:52.842822  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	I1123 10:18:52.127162  394315 out.go:252] * Restarting existing docker container for "newest-cni-956615" ...
	I1123 10:18:52.127226  394315 cli_runner.go:164] Run: docker start newest-cni-956615
	I1123 10:18:52.396853  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:52.415351  394315 kic.go:430] container "newest-cni-956615" state is running.
	I1123 10:18:52.415793  394315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956615
	I1123 10:18:52.434420  394315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/config.json ...
	I1123 10:18:52.434630  394315 machine.go:94] provisionDockerMachine start ...
	I1123 10:18:52.434722  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:52.453553  394315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:18:52.453858  394315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1123 10:18:52.453876  394315 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:18:52.454582  394315 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46492->127.0.0.1:33135: read: connection reset by peer
	I1123 10:18:55.599296  394315 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956615
	
	I1123 10:18:55.599336  394315 ubuntu.go:182] provisioning hostname "newest-cni-956615"
	I1123 10:18:55.599394  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:55.618738  394315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:18:55.618993  394315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1123 10:18:55.619012  394315 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-956615 && echo "newest-cni-956615" | sudo tee /etc/hostname
	I1123 10:18:55.770698  394315 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956615
	
	I1123 10:18:55.770811  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:55.788813  394315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:18:55.789027  394315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1123 10:18:55.789043  394315 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-956615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-956615/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-956615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:18:55.932742  394315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:18:55.932777  394315 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-64343/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-64343/.minikube}
	I1123 10:18:55.932804  394315 ubuntu.go:190] setting up certificates
	I1123 10:18:55.932828  394315 provision.go:84] configureAuth start
	I1123 10:18:55.932895  394315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956615
	I1123 10:18:55.950646  394315 provision.go:143] copyHostCerts
	I1123 10:18:55.950720  394315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem, removing ...
	I1123 10:18:55.950739  394315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem
	I1123 10:18:55.950807  394315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem (1082 bytes)
	I1123 10:18:55.950927  394315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem, removing ...
	I1123 10:18:55.950935  394315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem
	I1123 10:18:55.950963  394315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem (1123 bytes)
	I1123 10:18:55.951043  394315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem, removing ...
	I1123 10:18:55.951050  394315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem
	I1123 10:18:55.951084  394315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem (1675 bytes)
	I1123 10:18:55.951181  394315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem org=jenkins.newest-cni-956615 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-956615]
	I1123 10:18:55.985638  394315 provision.go:177] copyRemoteCerts
	I1123 10:18:55.985691  394315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:18:55.985729  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.003060  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:56.105036  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 10:18:56.122557  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:18:56.139483  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 10:18:56.157358  394315 provision.go:87] duration metric: took 224.510848ms to configureAuth
	I1123 10:18:56.157392  394315 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:18:56.157621  394315 config.go:182] Loaded profile config "newest-cni-956615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:56.157753  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.175573  394315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:18:56.175795  394315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1123 10:18:56.175812  394315 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:18:56.475612  394315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:18:56.475643  394315 machine.go:97] duration metric: took 4.040999325s to provisionDockerMachine
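provisionDockerMachine finishes by writing the CRIO_MINIKUBE_OPTIONS drop-in and restarting CRI-O over SSH (127.0.0.1:33135, user docker, key-based auth, as shown above). Below is a bare-bones sketch of running that one remote command with golang.org/x/crypto/ssh; the key path and command text are taken from the log, everything else (error handling, disabled host-key verification) is a simplification and not minikube's actual implementation.

// sshcmd.go: run a single command on the minikube node over SSH, the way the
// provisioning step above writes /etc/sysconfig/crio.minikube and restarts crio.
// Simplified sketch: host key checking is disabled here, which real code should not do.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33135", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := session.CombinedOutput(cmd)
	fmt.Printf("%s", out)
	if err != nil {
		panic(err)
	}
}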
	I1123 10:18:56.475663  394315 start.go:293] postStartSetup for "newest-cni-956615" (driver="docker")
	I1123 10:18:56.475674  394315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:18:56.475746  394315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:18:56.475803  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.493158  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:56.593217  394315 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:18:56.596801  394315 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:18:56.596832  394315 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:18:56.596844  394315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/addons for local assets ...
	I1123 10:18:56.596895  394315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/files for local assets ...
	I1123 10:18:56.596983  394315 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem -> 678702.pem in /etc/ssl/certs
	I1123 10:18:56.597076  394315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:18:56.604613  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:18:56.621377  394315 start.go:296] duration metric: took 145.698257ms for postStartSetup
	I1123 10:18:56.621453  394315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:18:56.621507  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.639509  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:56.736903  394315 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:18:56.741519  394315 fix.go:56] duration metric: took 4.633176884s for fixHost
	I1123 10:18:56.741547  394315 start.go:83] releasing machines lock for "newest-cni-956615", held for 4.633224185s
	I1123 10:18:56.741639  394315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956615
	I1123 10:18:56.759242  394315 ssh_runner.go:195] Run: cat /version.json
	I1123 10:18:56.759292  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.759313  394315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:18:56.759380  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.777311  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:56.778060  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:56.925608  394315 ssh_runner.go:195] Run: systemctl --version
	I1123 10:18:56.933469  394315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:18:56.968444  394315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:18:56.973374  394315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:18:56.973443  394315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:18:56.981566  394315 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:18:56.981589  394315 start.go:496] detecting cgroup driver to use...
	I1123 10:18:56.981627  394315 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 10:18:56.981686  394315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:18:56.995837  394315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:18:57.008368  394315 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:18:57.008418  394315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:18:57.023133  394315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:18:57.035490  394315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:18:57.115630  394315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:18:57.196692  394315 docker.go:234] disabling docker service ...
	I1123 10:18:57.196779  394315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:18:57.212027  394315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:18:57.224568  394315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:18:57.304246  394315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:18:57.383429  394315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:18:57.395933  394315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:18:57.410060  394315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:18:57.410151  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.419364  394315 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 10:18:57.419416  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.428434  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.437359  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.446280  394315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:18:57.454724  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.463785  394315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.472508  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.481248  394315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:18:57.488803  394315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:18:57.496308  394315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:18:57.573983  394315 ssh_runner.go:195] Run: sudo systemctl restart crio
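The sed commands above switch CRI-O's pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before the daemon is reloaded and restarted. A minimal local sketch of the same two substitutions in Go follows; using regexp on a local file instead of sed over SSH is an illustrative assumption, not how the test runs.

// crioconf.go: rewrite pause_image and cgroup_manager in a CRI-O drop-in,
// mirroring the sed substitutions shown in the log above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf" // path as seen in the log
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

After edits like these the log shows systemctl daemon-reload and systemctl restart crio being run so the runtime picks up the new settings.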
	I1123 10:18:57.718163  394315 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:18:57.718238  394315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:18:57.722219  394315 start.go:564] Will wait 60s for crictl version
	I1123 10:18:57.722278  394315 ssh_runner.go:195] Run: which crictl
	I1123 10:18:57.726031  394315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:18:57.751027  394315 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:18:57.751130  394315 ssh_runner.go:195] Run: crio --version
	I1123 10:18:57.778633  394315 ssh_runner.go:195] Run: crio --version
	I1123 10:18:57.806895  394315 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:18:57.807958  394315 cli_runner.go:164] Run: docker network inspect newest-cni-956615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:18:57.825213  394315 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:18:57.829406  394315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
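The bash one-liner above drops any existing host.minikube.internal entry from /etc/hosts and appends the gateway mapping. The same filter-and-append expressed in Go is sketched below; the address 192.168.76.1 comes from the log, while writing the result to a temp copy (rather than in place with sudo cp, as the one-liner does) is an assumption for illustration.

// hostsentry.go: strip old host.minikube.internal lines from /etc/hosts and
// append the current mapping, mirroring the { grep -v ...; echo ...; } one-liner above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.76.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	// The log copies /tmp/h.$$ back over /etc/hosts with sudo cp; here we only write the temp copy.
	out := strings.Join(kept, "\n") + "\n"
	if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote /tmp/hosts.new; copy it over /etc/hosts with elevated privileges")
}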
	I1123 10:18:57.841175  394315 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1123 10:18:57.842167  394315 kubeadm.go:884] updating cluster {Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:18:57.842312  394315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:18:57.842362  394315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:18:57.874472  394315 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:18:57.874497  394315 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:18:57.874557  394315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:18:57.899498  394315 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:18:57.899520  394315 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:18:57.899529  394315 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 10:18:57.899664  394315 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-956615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:18:57.899753  394315 ssh_runner.go:195] Run: crio config
	I1123 10:18:57.945307  394315 cni.go:84] Creating CNI manager for ""
	I1123 10:18:57.945334  394315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:18:57.945353  394315 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 10:18:57.945385  394315 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-956615 NodeName:newest-cni-956615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:18:57.945529  394315 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-956615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:18:57.945603  394315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:18:57.954040  394315 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:18:57.954111  394315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:18:57.962312  394315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 10:18:57.974790  394315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:18:57.987293  394315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
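The kubeadm config dumped above (with the KubeletConfiguration section setting cgroupDriver to the detected systemd driver and containerRuntimeEndpoint to the CRI-O socket) is what gets copied to /var/tmp/minikube/kubeadm.yaml.new here. A small sketch of emitting a trimmed-down version of that kubelet section as YAML in Go, using gopkg.in/yaml.v3; the struct is a simplification for illustration, not minikube's actual type.

// kubeletcfg.go: emit a trimmed-down KubeletConfiguration like the one in the
// generated kubeadm config above (illustrative struct, not minikube's own).
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	APIVersion               string `yaml:"apiVersion"`
	Kind                     string `yaml:"kind"`
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	HairpinMode              string `yaml:"hairpinMode"`
	FailSwapOn               bool   `yaml:"failSwapOn"`
	StaticPodPath            string `yaml:"staticPodPath"`
}

func main() {
	cfg := kubeletConfig{
		APIVersion:               "kubelet.config.k8s.io/v1beta1",
		Kind:                     "KubeletConfiguration",
		CgroupDriver:             "systemd", // matches the detected host cgroup driver
		ContainerRuntimeEndpoint: "unix:///var/run/crio/crio.sock",
		HairpinMode:              "hairpin-veth",
		FailSwapOn:               false,
		StaticPodPath:            "/etc/kubernetes/manifests",
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}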
	I1123 10:18:57.999467  394315 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:18:58.003369  394315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:18:58.012965  394315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:18:58.094317  394315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:18:58.124328  394315 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615 for IP: 192.168.76.2
	I1123 10:18:58.124349  394315 certs.go:195] generating shared ca certs ...
	I1123 10:18:58.124370  394315 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:58.124522  394315 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 10:18:58.124600  394315 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 10:18:58.124620  394315 certs.go:257] generating profile certs ...
	I1123 10:18:58.124722  394315 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/client.key
	I1123 10:18:58.124804  394315 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/apiserver.key.27a853cb
	I1123 10:18:58.124856  394315 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/proxy-client.key
	I1123 10:18:58.124994  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem (1338 bytes)
	W1123 10:18:58.125036  394315 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870_empty.pem, impossibly tiny 0 bytes
	I1123 10:18:58.125052  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 10:18:58.125113  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:18:58.125156  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:18:58.125191  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 10:18:58.125250  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:18:58.125897  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:18:58.144169  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:18:58.162839  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:18:58.181511  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 10:18:58.206364  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 10:18:58.224546  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:18:58.241212  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:18:58.257774  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:18:58.274527  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:18:58.291570  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem --> /usr/share/ca-certificates/67870.pem (1338 bytes)
	I1123 10:18:58.309143  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /usr/share/ca-certificates/678702.pem (1708 bytes)
	I1123 10:18:58.327593  394315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:18:58.340335  394315 ssh_runner.go:195] Run: openssl version
	I1123 10:18:58.346917  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:18:58.355590  394315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:18:58.359305  394315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:18:58.359346  394315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:18:58.394024  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:18:58.402117  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67870.pem && ln -fs /usr/share/ca-certificates/67870.pem /etc/ssl/certs/67870.pem"
	I1123 10:18:58.410347  394315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67870.pem
	I1123 10:18:58.413983  394315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:28 /usr/share/ca-certificates/67870.pem
	I1123 10:18:58.414033  394315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67870.pem
	I1123 10:18:58.447559  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67870.pem /etc/ssl/certs/51391683.0"
	I1123 10:18:58.455430  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678702.pem && ln -fs /usr/share/ca-certificates/678702.pem /etc/ssl/certs/678702.pem"
	I1123 10:18:58.463887  394315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678702.pem
	I1123 10:18:58.467518  394315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:28 /usr/share/ca-certificates/678702.pem
	I1123 10:18:58.467569  394315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678702.pem
	I1123 10:18:58.502214  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678702.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:18:58.510610  394315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:18:58.514564  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:18:58.548572  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:18:58.582475  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:18:58.617633  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:18:58.663551  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:18:58.706433  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
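Each `openssl x509 -noout -checkend 86400` run above asks whether the given certificate expires within the next 24 hours. An equivalent check with Go's crypto/x509 is sketched below; the hard-coded certificate path is a placeholder taken from the log, not a general-purpose tool.

// checkend.go: report whether a PEM certificate expires within 24h,
// equivalent to `openssl x509 -noout -checkend 86400` as run in the log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // placeholder path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM data found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1) // same convention as openssl -checkend: non-zero exit means expiring
	}
	fmt.Println("certificate is valid for at least another 24h")
}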
	I1123 10:18:58.755355  394315 kubeadm.go:401] StartCluster: {Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:18:58.755458  394315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:18:58.755534  394315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:18:58.792290  394315 cri.go:89] found id: "568d4e2e13f794ad02a27313df00fc828eacd24d6ea3ba4e30c0855507078458"
	I1123 10:18:58.792321  394315 cri.go:89] found id: "cc3d50e3b18ae83441894d5866b2ff39bc525a005f871ba93a8d151eef685e8f"
	I1123 10:18:58.792327  394315 cri.go:89] found id: "ab7965c57730d7f61bd3cc6d5b19e95f55562ca947a390e4616eeb716906b8a0"
	I1123 10:18:58.792332  394315 cri.go:89] found id: "3e6bea1c7000431f1f92160966ebdcb4353c6a869289c185164951c1370b9403"
	I1123 10:18:58.792336  394315 cri.go:89] found id: ""
	I1123 10:18:58.792387  394315 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:18:58.806842  394315 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:18:58Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:18:58.806912  394315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:18:58.815260  394315 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:18:58.815280  394315 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:18:58.815325  394315 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:18:58.822691  394315 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:18:58.823363  394315 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-956615" does not appear in /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:18:58.823632  394315 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-64343/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-956615" cluster setting kubeconfig missing "newest-cni-956615" context setting]
	I1123 10:18:58.824148  394315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:58.825412  394315 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:18:58.833345  394315 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 10:18:58.833374  394315 kubeadm.go:602] duration metric: took 18.088164ms to restartPrimaryControlPlane
	I1123 10:18:58.833384  394315 kubeadm.go:403] duration metric: took 78.041992ms to StartCluster
	I1123 10:18:58.833401  394315 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:58.833464  394315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:18:58.834283  394315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:58.834490  394315 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:18:58.834556  394315 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:18:58.834673  394315 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-956615"
	I1123 10:18:58.834693  394315 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-956615"
	W1123 10:18:58.834705  394315 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:18:58.834716  394315 config.go:182] Loaded profile config "newest-cni-956615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:58.834736  394315 host.go:66] Checking if "newest-cni-956615" exists ...
	I1123 10:18:58.834733  394315 addons.go:70] Setting default-storageclass=true in profile "newest-cni-956615"
	I1123 10:18:58.834767  394315 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-956615"
	I1123 10:18:58.834749  394315 addons.go:70] Setting dashboard=true in profile "newest-cni-956615"
	I1123 10:18:58.834803  394315 addons.go:239] Setting addon dashboard=true in "newest-cni-956615"
	W1123 10:18:58.834825  394315 addons.go:248] addon dashboard should already be in state true
	I1123 10:18:58.834866  394315 host.go:66] Checking if "newest-cni-956615" exists ...
	I1123 10:18:58.835064  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:58.835255  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:58.835473  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:58.838196  394315 out.go:179] * Verifying Kubernetes components...
	I1123 10:18:58.839321  394315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:18:58.862172  394315 addons.go:239] Setting addon default-storageclass=true in "newest-cni-956615"
	W1123 10:18:58.862197  394315 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:18:58.862226  394315 host.go:66] Checking if "newest-cni-956615" exists ...
	I1123 10:18:58.862714  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:58.863432  394315 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:18:58.863504  394315 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:18:58.864523  394315 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:18:58.864548  394315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:18:58.864608  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:58.865756  394315 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1123 10:18:55.341834  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	W1123 10:18:57.342845  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	I1123 10:18:58.866799  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:18:58.866823  394315 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:18:58.866899  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:58.896558  394315 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:18:58.896587  394315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:18:58.896649  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:58.902289  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:58.906565  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:58.921464  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:59.001978  394315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:18:59.018766  394315 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:18:59.018846  394315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:18:59.020845  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:18:59.020869  394315 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:18:59.027142  394315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:18:59.034971  394315 api_server.go:72] duration metric: took 200.448073ms to wait for apiserver process to appear ...
	I1123 10:18:59.035003  394315 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:18:59.035026  394315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:18:59.037406  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:18:59.037477  394315 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:18:59.039283  394315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:18:59.053902  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:18:59.053928  394315 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:18:59.070168  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:18:59.070193  394315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:18:59.086290  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:18:59.086317  394315 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:18:59.103619  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:18:59.103647  394315 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:18:59.116917  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:18:59.116941  394315 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:18:59.129744  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:18:59.129770  394315 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:18:59.142130  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:18:59.142153  394315 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:18:59.154836  394315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:19:00.310954  394315 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 10:19:00.310988  394315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 10:19:00.311005  394315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:19:00.343552  394315 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 10:19:00.343631  394315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 10:19:00.535410  394315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:19:00.541409  394315 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:19:00.541448  394315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:19:00.873844  394315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.846667586s)
	I1123 10:19:00.873914  394315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.834599768s)
	I1123 10:19:00.874012  394315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.71914196s)
	I1123 10:19:00.875636  394315 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-956615 addons enable metrics-server
	
	I1123 10:19:00.885104  394315 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 10:19:00.886167  394315 addons.go:530] duration metric: took 2.051621498s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 10:19:01.035140  394315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:19:01.039364  394315 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:19:01.039396  394315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:19:01.535682  394315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:19:01.540794  394315 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:19:01.541893  394315 api_server.go:141] control plane version: v1.34.1
	I1123 10:19:01.541921  394315 api_server.go:131] duration metric: took 2.506910717s to wait for apiserver health ...
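The 403 and 500 responses above are expected while the apiserver is still finishing its post-start hooks (rbac/bootstrap-roles and the system priority classes); the wait loop simply keeps re-polling /healthz until it returns 200. A minimal polling loop in Go is sketched below; the endpoint comes from the log, but skipping TLS verification and omitting client certificates are simplifications for illustration only.

// healthz.go: poll the apiserver /healthz endpoint until it reports 200 OK,
// similar to the wait loop in the log (simplified: no client certs, TLS not verified).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.76.2:8443/healthz" // address from the log
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for apiserver health")
}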
	I1123 10:19:01.541930  394315 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:19:01.545766  394315 system_pods.go:59] 8 kube-system pods found
	I1123 10:19:01.545807  394315 system_pods.go:61] "coredns-66bc5c9577-f5fbv" [a2a6f660-7d27-4ea8-b5b3-af124330c296] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 10:19:01.545816  394315 system_pods.go:61] "etcd-newest-cni-956615" [f8a39510-5fa3-42e6-a37e-6ceb4ff74876] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:19:01.545824  394315 system_pods.go:61] "kindnet-pfcv2" [5b3ef87c-1b75-4bb7-bafc-049f36caebc5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 10:19:01.545831  394315 system_pods.go:61] "kube-apiserver-newest-cni-956615" [05c7eaaf-a379-4c0e-b15e-b4fd9b251e21] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:19:01.545842  394315 system_pods.go:61] "kube-controller-manager-newest-cni-956615" [9a577ee2-bcae-49ed-a341-0361d8b3e799] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:19:01.545848  394315 system_pods.go:61] "kube-proxy-ktlnh" [ca7b0e9b-f2f8-4b3f-92d0-691144b655a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 10:19:01.545856  394315 system_pods.go:61] "kube-scheduler-newest-cni-956615" [4eb905ef-9079-49bf-97cf-87d904882001] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:19:01.545861  394315 system_pods.go:61] "storage-provisioner" [3cdc36f3-a1eb-45d6-9e02-f2c0514c2888] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 10:19:01.545870  394315 system_pods.go:74] duration metric: took 3.934068ms to wait for pod list to return data ...
	I1123 10:19:01.545877  394315 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:19:01.548629  394315 default_sa.go:45] found service account: "default"
	I1123 10:19:01.548653  394315 default_sa.go:55] duration metric: took 2.766657ms for default service account to be created ...
	I1123 10:19:01.548665  394315 kubeadm.go:587] duration metric: took 2.714149617s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:19:01.548682  394315 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:19:01.551434  394315 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:19:01.551456  394315 node_conditions.go:123] node cpu capacity is 8
	I1123 10:19:01.551477  394315 node_conditions.go:105] duration metric: took 2.79002ms to run NodePressure ...
	I1123 10:19:01.551492  394315 start.go:242] waiting for startup goroutines ...
	I1123 10:19:01.551505  394315 start.go:247] waiting for cluster config update ...
	I1123 10:19:01.551523  394315 start.go:256] writing updated cluster config ...
	I1123 10:19:01.551766  394315 ssh_runner.go:195] Run: rm -f paused
	I1123 10:19:01.602233  394315 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:19:01.604242  394315 out.go:179] * Done! kubectl is now configured to use "newest-cni-956615" cluster and "default" namespace by default
	W1123 10:18:59.842677  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	W1123 10:19:02.343352  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	W1123 10:19:04.842853  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	W1123 10:19:07.342852  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	W1123 10:19:09.842552  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	I1123 10:19:10.842708  390057 pod_ready.go:94] pod "coredns-66bc5c9577-c5c4c" is "Ready"
	I1123 10:19:10.842747  390057 pod_ready.go:86] duration metric: took 31.505776555s for pod "coredns-66bc5c9577-c5c4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:10.845112  390057 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:10.849055  390057 pod_ready.go:94] pod "etcd-default-k8s-diff-port-772252" is "Ready"
	I1123 10:19:10.849104  390057 pod_ready.go:86] duration metric: took 3.94958ms for pod "etcd-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:10.850833  390057 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:10.854413  390057 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-772252" is "Ready"
	I1123 10:19:10.854433  390057 pod_ready.go:86] duration metric: took 3.576307ms for pod "kube-apiserver-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:10.856388  390057 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:11.040127  390057 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-772252" is "Ready"
	I1123 10:19:11.040157  390057 pod_ready.go:86] duration metric: took 183.748035ms for pod "kube-controller-manager-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:11.241461  390057 pod_ready.go:83] waiting for pod "kube-proxy-xfghg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:11.640683  390057 pod_ready.go:94] pod "kube-proxy-xfghg" is "Ready"
	I1123 10:19:11.640712  390057 pod_ready.go:86] duration metric: took 399.222419ms for pod "kube-proxy-xfghg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:11.840965  390057 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:12.241129  390057 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-772252" is "Ready"
	I1123 10:19:12.241162  390057 pod_ready.go:86] duration metric: took 400.165755ms for pod "kube-scheduler-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:12.241178  390057 pod_ready.go:40] duration metric: took 32.907281835s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:19:12.282816  390057 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:19:12.284590  390057 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-772252" cluster and "default" namespace by default
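
For context, the pod_ready.go lines above poll each kube-system control-plane pod until its Ready condition reports True (or the pod is gone). Below is a minimal client-go sketch of that kind of readiness poll, not the minikube implementation itself; the pod name is taken from the log above, and the kubeconfig path is an assumed default.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True, mirroring
// the check that pod_ready.go logs as `pod "..." is "Ready"`.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: ~/.kube/config points at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll one kube-system pod (name taken from the log above) until it is
	// Ready or a timeout elapses, sleeping briefly between attempts.
	name := "coredns-66bc5c9577-c5c4c"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Printf("timed out waiting for pod %q\n", name)
}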
	
	
	==> CRI-O <==
	Nov 23 10:19:09 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:09.722883481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:19:09 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:09.766982162Z" level=info msg="Created container ecfb1f7191713b2b7e08f8913c6ed3071ab3fd46d99823ee5dfef933d862b004: kube-system/storage-provisioner/storage-provisioner" id=113afcbe-1472-4699-becc-8d2d14ca3a55 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:19:09 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:09.76766652Z" level=info msg="Starting container: ecfb1f7191713b2b7e08f8913c6ed3071ab3fd46d99823ee5dfef933d862b004" id=a359a624-e278-4225-9abc-00962e6a02e8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:19:09 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:09.769565297Z" level=info msg="Started container" PID=1721 containerID=ecfb1f7191713b2b7e08f8913c6ed3071ab3fd46d99823ee5dfef933d862b004 description=kube-system/storage-provisioner/storage-provisioner id=a359a624-e278-4225-9abc-00962e6a02e8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6ae471e7487a3430bd5e3fad5a62006097bbbb17b421be975b989f371ee3414b
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.329854212Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.334130894Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.33416417Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.334190483Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.337665643Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.337689386Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.337704531Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.341206103Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.341232168Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.341249573Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.344430934Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.344450941Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.344472842Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.347758298Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.347779628Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.347793938Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.351071478Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.351122633Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.351145315Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.354411156Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.354434299Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	ecfb1f7191713       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   6ae471e7487a3       storage-provisioner                                    kube-system
	7c3d5b52c5c83       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   60d293fad6dbf       dashboard-metrics-scraper-6ffb444bf9-4jppx             kubernetes-dashboard
	d1e84d4b33a1e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   78989dc172a98       kubernetes-dashboard-855c9754f9-cbx67                  kubernetes-dashboard
	3685dba7ef10e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   93514dc45e27f       busybox                                                default
	f06dad898472c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           48 seconds ago      Running             coredns                     0                   a911f6d1afda9       coredns-66bc5c9577-c5c4c                               kube-system
	6aaac7d5aab2f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   6ae471e7487a3       storage-provisioner                                    kube-system
	d43792ab06a60       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   5bfd5807a255d       kindnet-4dnjf                                          kube-system
	245e87d8d135a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           48 seconds ago      Running             kube-proxy                  0                   d3919e0a158a9       kube-proxy-xfghg                                       kube-system
	ca0b7481c92ff       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           51 seconds ago      Running             kube-controller-manager     0                   40a2ff1788463       kube-controller-manager-default-k8s-diff-port-772252   kube-system
	7a142a8a31476       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           51 seconds ago      Running             etcd                        0                   4a48754e5b95d       etcd-default-k8s-diff-port-772252                      kube-system
	a176b6c574c4d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           51 seconds ago      Running             kube-scheduler              0                   2611119415e85       kube-scheduler-default-k8s-diff-port-772252            kube-system
	7db7bd227bf9f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           51 seconds ago      Running             kube-apiserver              0                   3760be18c1f1e       kube-apiserver-default-k8s-diff-port-772252            kube-system
	
	
	==> coredns [f06dad898472c6e7ed3a85518f155634c223f818408388a6b6fff1ecce478bc4] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36393 - 42019 "HINFO IN 3419469779534895088.3506069302199184689. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032435928s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
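
The CoreDNS errors above are all the same symptom: the pod could not reach the in-cluster API service VIP (10.96.0.1:443) before the connection timed out, which also explains the earlier "Still waiting on: kubernetes" readiness messages. A minimal sketch that reproduces the same TCP dial is shown below; it assumes it is run from a pod inside this cluster and is illustrative only.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same destination CoreDNS was trying to reach; a timeout here
	// reproduces the "dial tcp 10.96.0.1:443: i/o timeout" symptom.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}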
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-772252
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-772252
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=default-k8s-diff-port-772252
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_17_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:17:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-772252
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:19:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:19:08 +0000   Sun, 23 Nov 2025 10:17:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:19:08 +0000   Sun, 23 Nov 2025 10:17:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:19:08 +0000   Sun, 23 Nov 2025 10:17:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:19:08 +0000   Sun, 23 Nov 2025 10:17:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-772252
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                752b5ee7-1a37-4c91-8868-54a0bdb64fb2
	  Boot ID:                    37682299-5e60-467e-85b2-43c912a4056e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-c5c4c                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-default-k8s-diff-port-772252                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         108s
	  kube-system                 kindnet-4dnjf                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-default-k8s-diff-port-772252             250m (3%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-772252    200m (2%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-xfghg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-default-k8s-diff-port-772252             100m (1%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4jppx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-cbx67                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 101s                 kube-proxy       
	  Normal  Starting                 48s                  kube-proxy       
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)  kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)  kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x8 over 115s)  kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    108s                 kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  108s                 kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     108s                 kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasSufficientPID
	  Normal  Starting                 108s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s                 node-controller  Node default-k8s-diff-port-772252 event: Registered Node default-k8s-diff-port-772252 in Controller
	  Normal  NodeReady                92s                  kubelet          Node default-k8s-diff-port-772252 status is now: NodeReady
	  Normal  Starting                 52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)    kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)    kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)    kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                  node-controller  Node default-k8s-diff-port-772252 event: Registered Node default-k8s-diff-port-772252 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[ +16.383752] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 09:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 10:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[Nov23 10:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 16 63 a6 3b 7c 08 06
	[  +0.000421] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e f8 56 88 48 d7 08 06
	[  +0.082350] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff be 6d 17 98 af e9 08 06
	[  +0.000334] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[ +24.687881] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 3c b3 56 e6 32 08 06
	[  +0.000364] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da b2 25 9e f0 5d 08 06
	[Nov23 10:16] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	[ +42.472302] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 bc be 6d 36 b3 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	
	
	==> etcd [7a142a8a31476f2dae05bfa267e6bed44ff2ff202efa2cb9c52dce5a34c9cb88] <==
	{"level":"warn","ts":"2025-11-23T10:18:37.140236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.146449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.153792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.160222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.166390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.172523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.179304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.190268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.197016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.204299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.213432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.220735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.228939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.235962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.242947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.250589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.257011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.264035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.270938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.285244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.291400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.298115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.354296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:50.985191Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.921054ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-c5c4c\" limit:1 ","response":"range_response_count:1 size:5946"}
	{"level":"info","ts":"2025-11-23T10:18:50.985317Z","caller":"traceutil/trace.go:172","msg":"trace[184269341] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-c5c4c; range_end:; response_count:1; response_revision:630; }","duration":"146.079682ms","start":"2025-11-23T10:18:50.839216Z","end":"2025-11-23T10:18:50.985296Z","steps":["trace[184269341] 'range keys from in-memory index tree'  (duration: 145.646886ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:19:27 up  3:01,  0 user,  load average: 2.68, 4.40, 2.94
	Linux default-k8s-diff-port-772252 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d43792ab06a602698a2e5d811ffc178fcc156441aa702f132a1a4a324793f51c] <==
	I1123 10:18:39.123404       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:18:39.123672       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1123 10:18:39.123831       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:18:39.123851       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:18:39.123873       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:18:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:18:39.324378       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:18:39.324452       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:18:39.324464       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:18:39.324628       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 10:19:09.325207       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 10:19:09.325214       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 10:19:09.325212       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 10:19:09.325212       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1123 10:19:10.924988       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:19:10.925013       1 metrics.go:72] Registering metrics
	I1123 10:19:10.925053       1 controller.go:711] "Syncing nftables rules"
	I1123 10:19:19.329546       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 10:19:19.329586       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7db7bd227bf9ff6dab49de87c436200ac4ce2681564d93007f27e8429ac58b29] <==
	I1123 10:18:37.822704       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 10:18:37.822728       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 10:18:37.823576       1 aggregator.go:171] initial CRD sync complete...
	I1123 10:18:37.823596       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 10:18:37.823602       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:18:37.823609       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:18:37.822143       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 10:18:37.822224       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	E1123 10:18:37.829792       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 10:18:37.832047       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 10:18:37.862953       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:18:37.877847       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:18:37.907524       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 10:18:38.114713       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:18:38.141271       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:18:38.160602       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:18:38.167250       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:18:38.173351       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:18:38.204644       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.118.3"}
	I1123 10:18:38.213191       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.125.107"}
	I1123 10:18:38.724559       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:18:41.201327       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:18:41.201384       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:18:41.553436       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:18:41.751467       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ca0b7481c92ffd4b2bbdda49cb03c9b00d30df31c6dab4f9e33326e98ce4ab98] <==
	I1123 10:18:41.169974       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 10:18:41.172225       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 10:18:41.174452       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 10:18:41.176650       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 10:18:41.179243       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:18:41.183058       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 10:18:41.197534       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 10:18:41.197558       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 10:18:41.197592       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 10:18:41.197680       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 10:18:41.197761       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 10:18:41.199007       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 10:18:41.199036       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 10:18:41.199063       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 10:18:41.199158       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 10:18:41.199166       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 10:18:41.199188       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 10:18:41.199786       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:18:41.200490       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 10:18:41.200517       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:18:41.201652       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:18:41.201659       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 10:18:41.203276       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 10:18:41.205130       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 10:18:41.213614       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [245e87d8d135aee2d7da0358a8becc82fe70154db598981be707ef69925970f0] <==
	I1123 10:18:38.982884       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:18:39.056229       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:18:39.156824       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:18:39.156867       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1123 10:18:39.157002       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:18:39.187356       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:18:39.187447       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:18:39.192776       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:18:39.193224       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:18:39.193282       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:18:39.195033       1 config.go:200] "Starting service config controller"
	I1123 10:18:39.195082       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:18:39.195151       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:18:39.195163       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:18:39.195161       1 config.go:309] "Starting node config controller"
	I1123 10:18:39.195179       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:18:39.195184       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:18:39.195180       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:18:39.295692       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:18:39.295732       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:18:39.295730       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:18:39.295766       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a176b6c574c4db89ccebca8123845fafee7b14ca1a0baae180f32d747de3393a] <==
	I1123 10:18:37.778294       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:18:37.781195       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:18:37.781290       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:18:37.782372       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:18:37.782463       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1123 10:18:37.784892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 10:18:37.784698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 10:18:37.791364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 10:18:37.791388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 10:18:37.796752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 10:18:37.797216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 10:18:37.797417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 10:18:37.797417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 10:18:37.797788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 10:18:37.798065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 10:18:37.798111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 10:18:37.798155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 10:18:37.798286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 10:18:37.800376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 10:18:37.801839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 10:18:37.801957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 10:18:37.802219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 10:18:37.802422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 10:18:37.808320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1123 10:18:39.081940       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:18:41 default-k8s-diff-port-772252 kubelet[731]: I1123 10:18:41.779779     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1a366b58-3166-4114-bd99-9b1dd0648311-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-cbx67\" (UID: \"1a366b58-3166-4114-bd99-9b1dd0648311\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cbx67"
	Nov 23 10:18:41 default-k8s-diff-port-772252 kubelet[731]: I1123 10:18:41.779822     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgf7d\" (UniqueName: \"kubernetes.io/projected/1a366b58-3166-4114-bd99-9b1dd0648311-kube-api-access-tgf7d\") pod \"kubernetes-dashboard-855c9754f9-cbx67\" (UID: \"1a366b58-3166-4114-bd99-9b1dd0648311\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cbx67"
	Nov 23 10:18:41 default-k8s-diff-port-772252 kubelet[731]: I1123 10:18:41.779842     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x52bx\" (UniqueName: \"kubernetes.io/projected/57299988-81da-4cbe-b187-b18dcc5efda2-kube-api-access-x52bx\") pod \"dashboard-metrics-scraper-6ffb444bf9-4jppx\" (UID: \"57299988-81da-4cbe-b187-b18dcc5efda2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4jppx"
	Nov 23 10:18:41 default-k8s-diff-port-772252 kubelet[731]: I1123 10:18:41.779862     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/57299988-81da-4cbe-b187-b18dcc5efda2-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4jppx\" (UID: \"57299988-81da-4cbe-b187-b18dcc5efda2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4jppx"
	Nov 23 10:18:46 default-k8s-diff-port-772252 kubelet[731]: I1123 10:18:46.656516     731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cbx67" podStartSLOduration=1.901573805 podStartE2EDuration="5.656492237s" podCreationTimestamp="2025-11-23 10:18:41 +0000 UTC" firstStartedPulling="2025-11-23 10:18:42.001306075 +0000 UTC m=+6.514469531" lastFinishedPulling="2025-11-23 10:18:45.756224517 +0000 UTC m=+10.269387963" observedRunningTime="2025-11-23 10:18:46.656430407 +0000 UTC m=+11.169593871" watchObservedRunningTime="2025-11-23 10:18:46.656492237 +0000 UTC m=+11.169655701"
	Nov 23 10:18:48 default-k8s-diff-port-772252 kubelet[731]: I1123 10:18:48.650338     731 scope.go:117] "RemoveContainer" containerID="7ae9105073053f5c93ef114fdbc842989f5ec3e066b1bf8f9adef906a76cd6e8"
	Nov 23 10:18:49 default-k8s-diff-port-772252 kubelet[731]: I1123 10:18:49.654555     731 scope.go:117] "RemoveContainer" containerID="7ae9105073053f5c93ef114fdbc842989f5ec3e066b1bf8f9adef906a76cd6e8"
	Nov 23 10:18:49 default-k8s-diff-port-772252 kubelet[731]: I1123 10:18:49.654743     731 scope.go:117] "RemoveContainer" containerID="0a5881fad61127a6e370d27a94cde49b6581f3c43f826720913df5990fcc9a84"
	Nov 23 10:18:49 default-k8s-diff-port-772252 kubelet[731]: E1123 10:18:49.654974     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4jppx_kubernetes-dashboard(57299988-81da-4cbe-b187-b18dcc5efda2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4jppx" podUID="57299988-81da-4cbe-b187-b18dcc5efda2"
	Nov 23 10:18:50 default-k8s-diff-port-772252 kubelet[731]: I1123 10:18:50.659212     731 scope.go:117] "RemoveContainer" containerID="0a5881fad61127a6e370d27a94cde49b6581f3c43f826720913df5990fcc9a84"
	Nov 23 10:18:50 default-k8s-diff-port-772252 kubelet[731]: E1123 10:18:50.659368     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4jppx_kubernetes-dashboard(57299988-81da-4cbe-b187-b18dcc5efda2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4jppx" podUID="57299988-81da-4cbe-b187-b18dcc5efda2"
	Nov 23 10:18:52 default-k8s-diff-port-772252 kubelet[731]: I1123 10:18:52.739119     731 scope.go:117] "RemoveContainer" containerID="0a5881fad61127a6e370d27a94cde49b6581f3c43f826720913df5990fcc9a84"
	Nov 23 10:18:52 default-k8s-diff-port-772252 kubelet[731]: E1123 10:18:52.739334     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4jppx_kubernetes-dashboard(57299988-81da-4cbe-b187-b18dcc5efda2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4jppx" podUID="57299988-81da-4cbe-b187-b18dcc5efda2"
	Nov 23 10:19:04 default-k8s-diff-port-772252 kubelet[731]: I1123 10:19:04.593786     731 scope.go:117] "RemoveContainer" containerID="0a5881fad61127a6e370d27a94cde49b6581f3c43f826720913df5990fcc9a84"
	Nov 23 10:19:04 default-k8s-diff-port-772252 kubelet[731]: I1123 10:19:04.698114     731 scope.go:117] "RemoveContainer" containerID="0a5881fad61127a6e370d27a94cde49b6581f3c43f826720913df5990fcc9a84"
	Nov 23 10:19:04 default-k8s-diff-port-772252 kubelet[731]: I1123 10:19:04.698323     731 scope.go:117] "RemoveContainer" containerID="7c3d5b52c5c83de3ca67ce90bb05bdd0ceb08abe56ed1f6ae756cc422b40a7a5"
	Nov 23 10:19:04 default-k8s-diff-port-772252 kubelet[731]: E1123 10:19:04.698519     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4jppx_kubernetes-dashboard(57299988-81da-4cbe-b187-b18dcc5efda2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4jppx" podUID="57299988-81da-4cbe-b187-b18dcc5efda2"
	Nov 23 10:19:09 default-k8s-diff-port-772252 kubelet[731]: I1123 10:19:09.714547     731 scope.go:117] "RemoveContainer" containerID="6aaac7d5aab2fdbe3b38a918864ef4d8be7510c3bdc381a0f0c2f96fa7f330d6"
	Nov 23 10:19:12 default-k8s-diff-port-772252 kubelet[731]: I1123 10:19:12.739491     731 scope.go:117] "RemoveContainer" containerID="7c3d5b52c5c83de3ca67ce90bb05bdd0ceb08abe56ed1f6ae756cc422b40a7a5"
	Nov 23 10:19:12 default-k8s-diff-port-772252 kubelet[731]: E1123 10:19:12.739718     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4jppx_kubernetes-dashboard(57299988-81da-4cbe-b187-b18dcc5efda2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4jppx" podUID="57299988-81da-4cbe-b187-b18dcc5efda2"
	Nov 23 10:19:24 default-k8s-diff-port-772252 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:19:24 default-k8s-diff-port-772252 kubelet[731]: I1123 10:19:24.333051     731 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 23 10:19:24 default-k8s-diff-port-772252 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:19:24 default-k8s-diff-port-772252 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 10:19:24 default-k8s-diff-port-772252 systemd[1]: kubelet.service: Consumed 1.531s CPU time.
	
	
	==> kubernetes-dashboard [d1e84d4b33a1e182b32a2df434b3eb1086c1002fcd0c9d64f056f4a58c281c75] <==
	2025/11/23 10:18:45 Starting overwatch
	2025/11/23 10:18:45 Using namespace: kubernetes-dashboard
	2025/11/23 10:18:45 Using in-cluster config to connect to apiserver
	2025/11/23 10:18:45 Using secret token for csrf signing
	2025/11/23 10:18:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 10:18:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 10:18:45 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 10:18:45 Generating JWE encryption key
	2025/11/23 10:18:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 10:18:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 10:18:45 Initializing JWE encryption key from synchronized object
	2025/11/23 10:18:45 Creating in-cluster Sidecar client
	2025/11/23 10:18:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:18:45 Serving insecurely on HTTP port: 9090
	2025/11/23 10:19:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6aaac7d5aab2fdbe3b38a918864ef4d8be7510c3bdc381a0f0c2f96fa7f330d6] <==
	I1123 10:18:38.956817       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 10:19:08.958998       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ecfb1f7191713b2b7e08f8913c6ed3071ab3fd46d99823ee5dfef933d862b004] <==
	I1123 10:19:09.782332       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:19:09.789498       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:19:09.789560       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:19:09.791715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:13.246558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:17.506830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:21.105820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:24.159265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:27.181930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:27.186149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:19:27.186325       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:19:27.186461       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ed91b9a4-76da-498a-b1ac-8ef14ef3f49c", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-772252_b60c49f7-a433-4b77-96cd-c9a56d54eb71 became leader
	I1123 10:19:27.186503       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-772252_b60c49f7-a433-4b77-96cd-c9a56d54eb71!
	W1123 10:19:27.188436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:27.191590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:19:27.286813       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-772252_b60c49f7-a433-4b77-96cd-c9a56d54eb71!
	

                                                
                                                
-- /stdout --
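The storage-provisioner failure captured above (dial tcp 10.96.0.1:443: i/o timeout) points at the in-cluster apiserver VIP being unreachable for a window after the restart. A rough manual probe of the same path, reusing the profile name and test binary from this run (and assuming curl is available in the node image), could look like:

	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-772252 -- curl -skm 5 https://10.96.0.1:443/version
	kubectl --context default-k8s-diff-port-772252 -n kube-system get pods -o wide

A timeout on the first command while the second shows a healthy kube-apiserver pod usually points at service-CIDR routing (kube-proxy/CNI) rather than the apiserver itself.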
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-772252 -n default-k8s-diff-port-772252
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-772252 -n default-k8s-diff-port-772252: exit status 2 (326.135899ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
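The exit status 2 here means minikube status considered at least one component unhealthy even though the selected {{.APIServer}} field prints Running. To see all component fields at once (a sketch against the same profile, using standard status options):

	out/minikube-linux-amd64 status -p default-k8s-diff-port-772252 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'
	out/minikube-linux-amd64 status -p default-k8s-diff-port-772252 --output=json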
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-772252 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-772252
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-772252:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e477e779f8bb284cda7af0a956566e37954aab2553cf746d0ae2cffb94c6e8bd",
	        "Created": "2025-11-23T10:17:18.483940214Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 390267,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:18:29.253134412Z",
	            "FinishedAt": "2025-11-23T10:18:28.357069206Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/e477e779f8bb284cda7af0a956566e37954aab2553cf746d0ae2cffb94c6e8bd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e477e779f8bb284cda7af0a956566e37954aab2553cf746d0ae2cffb94c6e8bd/hostname",
	        "HostsPath": "/var/lib/docker/containers/e477e779f8bb284cda7af0a956566e37954aab2553cf746d0ae2cffb94c6e8bd/hosts",
	        "LogPath": "/var/lib/docker/containers/e477e779f8bb284cda7af0a956566e37954aab2553cf746d0ae2cffb94c6e8bd/e477e779f8bb284cda7af0a956566e37954aab2553cf746d0ae2cffb94c6e8bd-json.log",
	        "Name": "/default-k8s-diff-port-772252",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-772252:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-772252",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e477e779f8bb284cda7af0a956566e37954aab2553cf746d0ae2cffb94c6e8bd",
	                "LowerDir": "/var/lib/docker/overlay2/361c50e32123a50aa7fcfec243d28300895e72a7fd05ca5549049a366f302526-init/diff:/var/lib/docker/overlay2/fa24abb4c55f78a010c7e2a32f724b8d5e912441e40bb77877899b0e5f3a9c8d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/361c50e32123a50aa7fcfec243d28300895e72a7fd05ca5549049a366f302526/merged",
	                "UpperDir": "/var/lib/docker/overlay2/361c50e32123a50aa7fcfec243d28300895e72a7fd05ca5549049a366f302526/diff",
	                "WorkDir": "/var/lib/docker/overlay2/361c50e32123a50aa7fcfec243d28300895e72a7fd05ca5549049a366f302526/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-772252",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-772252/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-772252",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-772252",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-772252",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c474cdbbbb8b5ed92c2965845c72ac246edc6e0e1e2cda55c1963505cc4efda2",
	            "SandboxKey": "/var/run/docker/netns/c474cdbbbb8b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-772252": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "02dae00b49d770f584e401c586980c0831e8332aaaff622d8a3a7b262132c748",
	                    "EndpointID": "5c7093a605be83533f84814602b8ab0197add586525e0eecb52d19f90d7133ce",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "aa:d2:74:d1:6a:eb",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-772252",
	                        "e477e779f8bb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
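The inspect output above confirms the container is running and not paused at the Docker level. When only the state and port mapping matter, the same information can be pulled with Go templates instead of the full JSON, for example:

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}} startedAt={{.State.StartedAt}}' default-k8s-diff-port-772252
	docker port default-k8s-diff-port-772252 8444

8444 is the apiserver port used by this default-k8s-diff-port profile, mapped to 127.0.0.1:33133 in the NetworkSettings above.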
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-772252 -n default-k8s-diff-port-772252
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-772252 -n default-k8s-diff-port-772252: exit status 2 (324.62889ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
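Since the failing step is the Pause test, it can also help to look at what is actually running inside the node rather than only the Docker container state. A possible check over SSH, assuming the kicbase image provides kubelet/crio systemd units and the crictl client (as these jobs do), is:

	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-772252 -- sudo systemctl is-active kubelet crio
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-772252 -- sudo crictl ps -a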
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-772252 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-772252 logs -n 25: (1.029623515s)
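The post-mortem below keeps only the last 25 lines per container. For a fuller capture, minikube logs can write everything to a file, and the kubelet journal can be tailed directly over SSH (the output path below is just an example):

	out/minikube-linux-amd64 -p default-k8s-diff-port-772252 logs --file=/tmp/default-k8s-diff-port-772252.log
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-772252 -- sudo journalctl -u kubelet --no-pager | tail -n 200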
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-772252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-772252 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ delete  │ -p old-k8s-version-990757                                                                                                                                                                                                                     │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ no-preload-541522 image list --format=json                                                                                                                                                                                                    │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p no-preload-541522 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ delete  │ -p old-k8s-version-990757                                                                                                                                                                                                                     │ old-k8s-version-990757       │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ image   │ embed-certs-412306 image list --format=json                                                                                                                                                                                                   │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p newest-cni-956615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ pause   │ -p embed-certs-412306 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ delete  │ -p no-preload-541522                                                                                                                                                                                                                          │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ delete  │ -p no-preload-541522                                                                                                                                                                                                                          │ no-preload-541522            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ delete  │ -p embed-certs-412306                                                                                                                                                                                                                         │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ delete  │ -p embed-certs-412306                                                                                                                                                                                                                         │ embed-certs-412306           │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-772252 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p default-k8s-diff-port-772252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-956615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │                     │
	│ stop    │ -p newest-cni-956615 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ addons  │ enable dashboard -p newest-cni-956615 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:18 UTC │
	│ start   │ -p newest-cni-956615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:18 UTC │ 23 Nov 25 10:19 UTC │
	│ image   │ newest-cni-956615 image list --format=json                                                                                                                                                                                                    │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │ 23 Nov 25 10:19 UTC │
	│ pause   │ -p newest-cni-956615 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │                     │
	│ delete  │ -p newest-cni-956615                                                                                                                                                                                                                          │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │ 23 Nov 25 10:19 UTC │
	│ delete  │ -p newest-cni-956615                                                                                                                                                                                                                          │ newest-cni-956615            │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │ 23 Nov 25 10:19 UTC │
	│ image   │ default-k8s-diff-port-772252 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │ 23 Nov 25 10:19 UTC │
	│ pause   │ -p default-k8s-diff-port-772252 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-772252 │ jenkins │ v1.37.0 │ 23 Nov 25 10:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:18:51
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:18:51.922533  394315 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:18:51.922773  394315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:51.922782  394315 out.go:374] Setting ErrFile to fd 2...
	I1123 10:18:51.922786  394315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:18:51.922982  394315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:18:51.923464  394315 out.go:368] Setting JSON to false
	I1123 10:18:51.924704  394315 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10873,"bootTime":1763882259,"procs":448,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:18:51.924759  394315 start.go:143] virtualization: kvm guest
	I1123 10:18:51.926884  394315 out.go:179] * [newest-cni-956615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:18:51.928337  394315 notify.go:221] Checking for updates...
	I1123 10:18:51.928373  394315 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:18:51.929751  394315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:18:51.931020  394315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:18:51.932349  394315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 10:18:51.933744  394315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:18:51.935099  394315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:18:51.936795  394315 config.go:182] Loaded profile config "newest-cni-956615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:51.937407  394315 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:18:51.961344  394315 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 10:18:51.961523  394315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:18:52.019286  394315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-23 10:18:52.009047301 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:18:52.019399  394315 docker.go:319] overlay module found
	I1123 10:18:52.021270  394315 out.go:179] * Using the docker driver based on existing profile
	I1123 10:18:52.022550  394315 start.go:309] selected driver: docker
	I1123 10:18:52.022565  394315 start.go:927] validating driver "docker" against &{Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:18:52.022649  394315 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:18:52.023207  394315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:18:52.080543  394315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-23 10:18:52.070324364 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:18:52.080908  394315 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:18:52.080949  394315 cni.go:84] Creating CNI manager for ""
	I1123 10:18:52.081035  394315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:18:52.081106  394315 start.go:353] cluster config:
	{Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:18:52.083159  394315 out.go:179] * Starting "newest-cni-956615" primary control-plane node in "newest-cni-956615" cluster
	I1123 10:18:52.084255  394315 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:18:52.085479  394315 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:18:52.086568  394315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:18:52.086596  394315 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 10:18:52.086606  394315 cache.go:65] Caching tarball of preloaded images
	I1123 10:18:52.086653  394315 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:18:52.086679  394315 preload.go:238] Found /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 10:18:52.086690  394315 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:18:52.086776  394315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/config.json ...
	I1123 10:18:52.108195  394315 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:18:52.108214  394315 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:18:52.108225  394315 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:18:52.108262  394315 start.go:360] acquireMachinesLock for newest-cni-956615: {Name:mk5c1d30234ac54be25b363f4d474b6dfbb1cb30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:18:52.108312  394315 start.go:364] duration metric: took 32.687µs to acquireMachinesLock for "newest-cni-956615"
	I1123 10:18:52.108328  394315 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:18:52.108334  394315 fix.go:54] fixHost starting: 
	I1123 10:18:52.108536  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:52.125249  394315 fix.go:112] recreateIfNeeded on newest-cni-956615: state=Stopped err=<nil>
	W1123 10:18:52.125297  394315 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 10:18:50.342961  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	W1123 10:18:52.842822  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	I1123 10:18:52.127162  394315 out.go:252] * Restarting existing docker container for "newest-cni-956615" ...
	I1123 10:18:52.127226  394315 cli_runner.go:164] Run: docker start newest-cni-956615
	I1123 10:18:52.396853  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:52.415351  394315 kic.go:430] container "newest-cni-956615" state is running.
	I1123 10:18:52.415793  394315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956615
	I1123 10:18:52.434420  394315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/config.json ...
	I1123 10:18:52.434630  394315 machine.go:94] provisionDockerMachine start ...
	I1123 10:18:52.434722  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:52.453553  394315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:18:52.453858  394315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1123 10:18:52.453876  394315 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:18:52.454582  394315 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46492->127.0.0.1:33135: read: connection reset by peer
	I1123 10:18:55.599296  394315 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956615
	
	I1123 10:18:55.599336  394315 ubuntu.go:182] provisioning hostname "newest-cni-956615"
	I1123 10:18:55.599394  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:55.618738  394315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:18:55.618993  394315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1123 10:18:55.619012  394315 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-956615 && echo "newest-cni-956615" | sudo tee /etc/hostname
	I1123 10:18:55.770698  394315 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-956615
	
	I1123 10:18:55.770811  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:55.788813  394315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:18:55.789027  394315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1123 10:18:55.789043  394315 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-956615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-956615/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-956615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:18:55.932742  394315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:18:55.932777  394315 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-64343/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-64343/.minikube}
	I1123 10:18:55.932804  394315 ubuntu.go:190] setting up certificates
	I1123 10:18:55.932828  394315 provision.go:84] configureAuth start
	I1123 10:18:55.932895  394315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956615
	I1123 10:18:55.950646  394315 provision.go:143] copyHostCerts
	I1123 10:18:55.950720  394315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem, removing ...
	I1123 10:18:55.950739  394315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem
	I1123 10:18:55.950807  394315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/ca.pem (1082 bytes)
	I1123 10:18:55.950927  394315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem, removing ...
	I1123 10:18:55.950935  394315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem
	I1123 10:18:55.950963  394315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/cert.pem (1123 bytes)
	I1123 10:18:55.951043  394315 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem, removing ...
	I1123 10:18:55.951050  394315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem
	I1123 10:18:55.951084  394315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-64343/.minikube/key.pem (1675 bytes)
	I1123 10:18:55.951181  394315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem org=jenkins.newest-cni-956615 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-956615]
	I1123 10:18:55.985638  394315 provision.go:177] copyRemoteCerts
	I1123 10:18:55.985691  394315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:18:55.985729  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.003060  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:56.105036  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 10:18:56.122557  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:18:56.139483  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 10:18:56.157358  394315 provision.go:87] duration metric: took 224.510848ms to configureAuth
	I1123 10:18:56.157392  394315 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:18:56.157621  394315 config.go:182] Loaded profile config "newest-cni-956615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:56.157753  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.175573  394315 main.go:143] libmachine: Using SSH client type: native
	I1123 10:18:56.175795  394315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1123 10:18:56.175812  394315 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:18:56.475612  394315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:18:56.475643  394315 machine.go:97] duration metric: took 4.040999325s to provisionDockerMachine
	I1123 10:18:56.475663  394315 start.go:293] postStartSetup for "newest-cni-956615" (driver="docker")
	I1123 10:18:56.475674  394315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:18:56.475746  394315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:18:56.475803  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.493158  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:56.593217  394315 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:18:56.596801  394315 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:18:56.596832  394315 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:18:56.596844  394315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/addons for local assets ...
	I1123 10:18:56.596895  394315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-64343/.minikube/files for local assets ...
	I1123 10:18:56.596983  394315 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem -> 678702.pem in /etc/ssl/certs
	I1123 10:18:56.597076  394315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:18:56.604613  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:18:56.621377  394315 start.go:296] duration metric: took 145.698257ms for postStartSetup
	I1123 10:18:56.621453  394315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:18:56.621507  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.639509  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:56.736903  394315 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:18:56.741519  394315 fix.go:56] duration metric: took 4.633176884s for fixHost
	I1123 10:18:56.741547  394315 start.go:83] releasing machines lock for "newest-cni-956615", held for 4.633224185s
	I1123 10:18:56.741639  394315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-956615
	I1123 10:18:56.759242  394315 ssh_runner.go:195] Run: cat /version.json
	I1123 10:18:56.759292  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.759313  394315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:18:56.759380  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:56.777311  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:56.778060  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:56.925608  394315 ssh_runner.go:195] Run: systemctl --version
	I1123 10:18:56.933469  394315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:18:56.968444  394315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:18:56.973374  394315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:18:56.973443  394315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:18:56.981566  394315 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:18:56.981589  394315 start.go:496] detecting cgroup driver to use...
	I1123 10:18:56.981627  394315 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 10:18:56.981686  394315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:18:56.995837  394315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:18:57.008368  394315 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:18:57.008418  394315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:18:57.023133  394315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:18:57.035490  394315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:18:57.115630  394315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:18:57.196692  394315 docker.go:234] disabling docker service ...
	I1123 10:18:57.196779  394315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:18:57.212027  394315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:18:57.224568  394315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:18:57.304246  394315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:18:57.383429  394315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:18:57.395933  394315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:18:57.410060  394315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:18:57.410151  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.419364  394315 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 10:18:57.419416  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.428434  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.437359  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.446280  394315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:18:57.454724  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.463785  394315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.472508  394315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:18:57.481248  394315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:18:57.488803  394315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:18:57.496308  394315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:18:57.573983  394315 ssh_runner.go:195] Run: sudo systemctl restart crio
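(Editor's note) The lines above are the whole CRI-O preparation step: crictl is pointed at /var/run/crio/crio.sock, the pause image and cgroup manager are rewritten in the 02-crio.conf drop-in, IPv4 forwarding is enabled, and the daemon is restarted. Condensed into a standalone sketch, assuming the same drop-in path used in this run (/etc/crio/crio.conf.d/02-crio.conf):

  # Point CRI-O at the pause image and cgroup driver minikube expects, then restart it.
  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'   # enable IPv4 forwarding, as the log does above
  sudo systemctl daemon-reload
  sudo systemctl restart crio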
	I1123 10:18:57.718163  394315 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:18:57.718238  394315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:18:57.722219  394315 start.go:564] Will wait 60s for crictl version
	I1123 10:18:57.722278  394315 ssh_runner.go:195] Run: which crictl
	I1123 10:18:57.726031  394315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:18:57.751027  394315 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:18:57.751130  394315 ssh_runner.go:195] Run: crio --version
	I1123 10:18:57.778633  394315 ssh_runner.go:195] Run: crio --version
	I1123 10:18:57.806895  394315 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:18:57.807958  394315 cli_runner.go:164] Run: docker network inspect newest-cni-956615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:18:57.825213  394315 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:18:57.829406  394315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:18:57.841175  394315 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1123 10:18:57.842167  394315 kubeadm.go:884] updating cluster {Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:18:57.842312  394315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:18:57.842362  394315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:18:57.874472  394315 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:18:57.874497  394315 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:18:57.874557  394315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:18:57.899498  394315 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:18:57.899520  394315 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:18:57.899529  394315 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 10:18:57.899664  394315 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-956615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
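(Editor's note) The kubelet unit override above is what minikube writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down (the 367-byte scp). To see what systemd actually merged on the node, something like the following works; this is only a sketch, and the profile name newest-cni-956615 is taken from this run:

  # Show the kubelet unit together with every drop-in systemd has loaded
  minikube -p newest-cni-956615 ssh -- systemctl cat kubelet
  # Or read the generated drop-in directly
  minikube -p newest-cni-956615 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf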
	I1123 10:18:57.899753  394315 ssh_runner.go:195] Run: crio config
	I1123 10:18:57.945307  394315 cni.go:84] Creating CNI manager for ""
	I1123 10:18:57.945334  394315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:18:57.945353  394315 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 10:18:57.945385  394315 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-956615 NodeName:newest-cni-956615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:18:57.945529  394315 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-956615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
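(Editor's note) This rendered kubeadm config is written to /var/tmp/minikube/kubeadm.yaml.new (the 2211-byte scp below) and later diffed against the existing /var/tmp/minikube/kubeadm.yaml; an empty diff is what lets the restart path skip kubeadm reconfiguration. A sketch of the same check by hand, using the profile name from this run:

  # An empty diff means the running control plane already matches the rendered config.
  minikube -p newest-cni-956615 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new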
	
	I1123 10:18:57.945603  394315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:18:57.954040  394315 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:18:57.954111  394315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:18:57.962312  394315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 10:18:57.974790  394315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:18:57.987293  394315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1123 10:18:57.999467  394315 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:18:58.003369  394315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:18:58.012965  394315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:18:58.094317  394315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:18:58.124328  394315 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615 for IP: 192.168.76.2
	I1123 10:18:58.124349  394315 certs.go:195] generating shared ca certs ...
	I1123 10:18:58.124370  394315 certs.go:227] acquiring lock for ca certs: {Name:mk67e8270fbc52c1335f94c5f9fad08f54ad62b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:58.124522  394315 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key
	I1123 10:18:58.124600  394315 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key
	I1123 10:18:58.124620  394315 certs.go:257] generating profile certs ...
	I1123 10:18:58.124722  394315 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/client.key
	I1123 10:18:58.124804  394315 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/apiserver.key.27a853cb
	I1123 10:18:58.124856  394315 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/proxy-client.key
	I1123 10:18:58.124994  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem (1338 bytes)
	W1123 10:18:58.125036  394315 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870_empty.pem, impossibly tiny 0 bytes
	I1123 10:18:58.125052  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca-key.pem (1679 bytes)
	I1123 10:18:58.125113  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/ca.pem (1082 bytes)
	I1123 10:18:58.125156  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:18:58.125191  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/certs/key.pem (1675 bytes)
	I1123 10:18:58.125250  394315 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem (1708 bytes)
	I1123 10:18:58.125897  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:18:58.144169  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:18:58.162839  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:18:58.181511  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 10:18:58.206364  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 10:18:58.224546  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:18:58.241212  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:18:58.257774  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/newest-cni-956615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:18:58.274527  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:18:58.291570  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/certs/67870.pem --> /usr/share/ca-certificates/67870.pem (1338 bytes)
	I1123 10:18:58.309143  394315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/ssl/certs/678702.pem --> /usr/share/ca-certificates/678702.pem (1708 bytes)
	I1123 10:18:58.327593  394315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:18:58.340335  394315 ssh_runner.go:195] Run: openssl version
	I1123 10:18:58.346917  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:18:58.355590  394315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:18:58.359305  394315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:23 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:18:58.359346  394315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:18:58.394024  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:18:58.402117  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67870.pem && ln -fs /usr/share/ca-certificates/67870.pem /etc/ssl/certs/67870.pem"
	I1123 10:18:58.410347  394315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67870.pem
	I1123 10:18:58.413983  394315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:28 /usr/share/ca-certificates/67870.pem
	I1123 10:18:58.414033  394315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67870.pem
	I1123 10:18:58.447559  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67870.pem /etc/ssl/certs/51391683.0"
	I1123 10:18:58.455430  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678702.pem && ln -fs /usr/share/ca-certificates/678702.pem /etc/ssl/certs/678702.pem"
	I1123 10:18:58.463887  394315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678702.pem
	I1123 10:18:58.467518  394315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:28 /usr/share/ca-certificates/678702.pem
	I1123 10:18:58.467569  394315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678702.pem
	I1123 10:18:58.502214  394315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678702.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:18:58.510610  394315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:18:58.514564  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:18:58.548572  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:18:58.582475  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:18:58.617633  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:18:58.663551  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:18:58.706433  394315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
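(Editor's note) Each `openssl x509 ... -checkend 86400` call above exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now; that exit status is how minikube decides the existing control-plane certificates can be reused. A minimal sketch, run on the node against one of the same paths:

  # Exit status 0: certificate does not expire within the next 24 hours.
  if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
    echo "cert still valid for at least 24h"
  else
    echo "cert expires within 24h (would trigger regeneration)"
  fi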
	I1123 10:18:58.755355  394315 kubeadm.go:401] StartCluster: {Name:newest-cni-956615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-956615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:18:58.755458  394315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:18:58.755534  394315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:18:58.792290  394315 cri.go:89] found id: "568d4e2e13f794ad02a27313df00fc828eacd24d6ea3ba4e30c0855507078458"
	I1123 10:18:58.792321  394315 cri.go:89] found id: "cc3d50e3b18ae83441894d5866b2ff39bc525a005f871ba93a8d151eef685e8f"
	I1123 10:18:58.792327  394315 cri.go:89] found id: "ab7965c57730d7f61bd3cc6d5b19e95f55562ca947a390e4616eeb716906b8a0"
	I1123 10:18:58.792332  394315 cri.go:89] found id: "3e6bea1c7000431f1f92160966ebdcb4353c6a869289c185164951c1370b9403"
	I1123 10:18:58.792336  394315 cri.go:89] found id: ""
	I1123 10:18:58.792387  394315 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:18:58.806842  394315 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:18:58Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:18:58.806912  394315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:18:58.815260  394315 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:18:58.815280  394315 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:18:58.815325  394315 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:18:58.822691  394315 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:18:58.823363  394315 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-956615" does not appear in /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:18:58.823632  394315 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-64343/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-956615" cluster setting kubeconfig missing "newest-cni-956615" context setting]
	I1123 10:18:58.824148  394315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:58.825412  394315 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:18:58.833345  394315 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 10:18:58.833374  394315 kubeadm.go:602] duration metric: took 18.088164ms to restartPrimaryControlPlane
	I1123 10:18:58.833384  394315 kubeadm.go:403] duration metric: took 78.041992ms to StartCluster
	I1123 10:18:58.833401  394315 settings.go:142] acquiring lock: {Name:mk59dd1f2cda25209e70d86e9b0f1980a8c48b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:58.833464  394315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:18:58.834283  394315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/kubeconfig: {Name:mk8b64b4fc56d0d96d9d3d9fc407ea836f43954a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:18:58.834490  394315 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:18:58.834556  394315 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:18:58.834673  394315 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-956615"
	I1123 10:18:58.834693  394315 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-956615"
	W1123 10:18:58.834705  394315 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:18:58.834716  394315 config.go:182] Loaded profile config "newest-cni-956615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:18:58.834736  394315 host.go:66] Checking if "newest-cni-956615" exists ...
	I1123 10:18:58.834733  394315 addons.go:70] Setting default-storageclass=true in profile "newest-cni-956615"
	I1123 10:18:58.834767  394315 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-956615"
	I1123 10:18:58.834749  394315 addons.go:70] Setting dashboard=true in profile "newest-cni-956615"
	I1123 10:18:58.834803  394315 addons.go:239] Setting addon dashboard=true in "newest-cni-956615"
	W1123 10:18:58.834825  394315 addons.go:248] addon dashboard should already be in state true
	I1123 10:18:58.834866  394315 host.go:66] Checking if "newest-cni-956615" exists ...
	I1123 10:18:58.835064  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:58.835255  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:58.835473  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:58.838196  394315 out.go:179] * Verifying Kubernetes components...
	I1123 10:18:58.839321  394315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:18:58.862172  394315 addons.go:239] Setting addon default-storageclass=true in "newest-cni-956615"
	W1123 10:18:58.862197  394315 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:18:58.862226  394315 host.go:66] Checking if "newest-cni-956615" exists ...
	I1123 10:18:58.862714  394315 cli_runner.go:164] Run: docker container inspect newest-cni-956615 --format={{.State.Status}}
	I1123 10:18:58.863432  394315 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:18:58.863504  394315 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:18:58.864523  394315 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:18:58.864548  394315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:18:58.864608  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:58.865756  394315 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1123 10:18:55.341834  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	W1123 10:18:57.342845  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	I1123 10:18:58.866799  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:18:58.866823  394315 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:18:58.866899  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:58.896558  394315 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:18:58.896587  394315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:18:58.896649  394315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-956615
	I1123 10:18:58.902289  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:58.906565  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:58.921464  394315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/newest-cni-956615/id_rsa Username:docker}
	I1123 10:18:59.001978  394315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:18:59.018766  394315 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:18:59.018846  394315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:18:59.020845  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:18:59.020869  394315 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:18:59.027142  394315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:18:59.034971  394315 api_server.go:72] duration metric: took 200.448073ms to wait for apiserver process to appear ...
	I1123 10:18:59.035003  394315 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:18:59.035026  394315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:18:59.037406  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:18:59.037477  394315 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:18:59.039283  394315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:18:59.053902  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:18:59.053928  394315 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:18:59.070168  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:18:59.070193  394315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:18:59.086290  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:18:59.086317  394315 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:18:59.103619  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:18:59.103647  394315 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:18:59.116917  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:18:59.116941  394315 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:18:59.129744  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:18:59.129770  394315 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:18:59.142130  394315 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:18:59.142153  394315 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:18:59.154836  394315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
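(Editor's note) The dashboard manifests applied above (namespace, RBAC, deployment, service) all land in the kubernetes-dashboard namespace created by dashboard-ns.yaml. A quick, hedged way to watch them come up once the apply finishes, using the kubeconfig context created by this run:

  # Dashboard objects live in the namespace created by dashboard-ns.yaml
  kubectl --context newest-cni-956615 -n kubernetes-dashboard get deploy,svc,pods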
	I1123 10:19:00.310954  394315 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 10:19:00.310988  394315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 10:19:00.311005  394315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:19:00.343552  394315 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 10:19:00.343631  394315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 10:19:00.535410  394315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:19:00.541409  394315 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:19:00.541448  394315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
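(Editor's note) The sequence above is the normal apiserver warm-up: first 403, typically because the RBAC bootstrap roles that grant anonymous access to /healthz have not been created yet, then 500 with rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes still pending, and finally 200 further down. The same per-check view can be fetched directly; a sketch using the apiserver address from this run:

  # -k skips TLS verification; ?verbose lists every sub-check, as in the log above
  curl -k "https://192.168.76.2:8443/healthz?verbose"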
	I1123 10:19:00.873844  394315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.846667586s)
	I1123 10:19:00.873914  394315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.834599768s)
	I1123 10:19:00.874012  394315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.71914196s)
	I1123 10:19:00.875636  394315 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-956615 addons enable metrics-server
	
	I1123 10:19:00.885104  394315 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 10:19:00.886167  394315 addons.go:530] duration metric: took 2.051621498s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
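(Editor's note) Besides the metrics-server hint printed above, the dashboard addon just enabled can be opened once the cluster is up; a sketch with the same profile:

  # Prints a proxied URL for the dashboard UI instead of opening a browser
  minikube -p newest-cni-956615 dashboard --url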
	I1123 10:19:01.035140  394315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:19:01.039364  394315 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:19:01.039396  394315 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:19:01.535682  394315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:19:01.540794  394315 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:19:01.541893  394315 api_server.go:141] control plane version: v1.34.1
	I1123 10:19:01.541921  394315 api_server.go:131] duration metric: took 2.506910717s to wait for apiserver health ...
	I1123 10:19:01.541930  394315 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:19:01.545766  394315 system_pods.go:59] 8 kube-system pods found
	I1123 10:19:01.545807  394315 system_pods.go:61] "coredns-66bc5c9577-f5fbv" [a2a6f660-7d27-4ea8-b5b3-af124330c296] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 10:19:01.545816  394315 system_pods.go:61] "etcd-newest-cni-956615" [f8a39510-5fa3-42e6-a37e-6ceb4ff74876] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:19:01.545824  394315 system_pods.go:61] "kindnet-pfcv2" [5b3ef87c-1b75-4bb7-bafc-049f36caebc5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 10:19:01.545831  394315 system_pods.go:61] "kube-apiserver-newest-cni-956615" [05c7eaaf-a379-4c0e-b15e-b4fd9b251e21] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:19:01.545842  394315 system_pods.go:61] "kube-controller-manager-newest-cni-956615" [9a577ee2-bcae-49ed-a341-0361d8b3e799] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:19:01.545848  394315 system_pods.go:61] "kube-proxy-ktlnh" [ca7b0e9b-f2f8-4b3f-92d0-691144b655a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 10:19:01.545856  394315 system_pods.go:61] "kube-scheduler-newest-cni-956615" [4eb905ef-9079-49bf-97cf-87d904882001] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:19:01.545861  394315 system_pods.go:61] "storage-provisioner" [3cdc36f3-a1eb-45d6-9e02-f2c0514c2888] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 10:19:01.545870  394315 system_pods.go:74] duration metric: took 3.934068ms to wait for pod list to return data ...
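(Editor's note) Only apiserver, default_sa and system_pods are set to true in the VerifyComponents map for this profile (see the cluster config above), so the start can finish even though coredns and storage-provisioner are still Pending behind the node.kubernetes.io/not-ready taint. A hedged way to watch that taint clear, using the kubeconfig context created by this run:

  # The not-ready taint disappears once the CNI (kindnet here) is up and kubelet reports the node Ready
  kubectl --context newest-cni-956615 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'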
	I1123 10:19:01.545877  394315 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:19:01.548629  394315 default_sa.go:45] found service account: "default"
	I1123 10:19:01.548653  394315 default_sa.go:55] duration metric: took 2.766657ms for default service account to be created ...
	I1123 10:19:01.548665  394315 kubeadm.go:587] duration metric: took 2.714149617s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:19:01.548682  394315 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:19:01.551434  394315 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 10:19:01.551456  394315 node_conditions.go:123] node cpu capacity is 8
	I1123 10:19:01.551477  394315 node_conditions.go:105] duration metric: took 2.79002ms to run NodePressure ...
	I1123 10:19:01.551492  394315 start.go:242] waiting for startup goroutines ...
	I1123 10:19:01.551505  394315 start.go:247] waiting for cluster config update ...
	I1123 10:19:01.551523  394315 start.go:256] writing updated cluster config ...
	I1123 10:19:01.551766  394315 ssh_runner.go:195] Run: rm -f paused
	I1123 10:19:01.602233  394315 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:19:01.604242  394315 out.go:179] * Done! kubectl is now configured to use "newest-cni-956615" cluster and "default" namespace by default
	W1123 10:18:59.842677  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	W1123 10:19:02.343352  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	W1123 10:19:04.842853  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	W1123 10:19:07.342852  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	W1123 10:19:09.842552  390057 pod_ready.go:104] pod "coredns-66bc5c9577-c5c4c" is not "Ready", error: <nil>
	I1123 10:19:10.842708  390057 pod_ready.go:94] pod "coredns-66bc5c9577-c5c4c" is "Ready"
	I1123 10:19:10.842747  390057 pod_ready.go:86] duration metric: took 31.505776555s for pod "coredns-66bc5c9577-c5c4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:10.845112  390057 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:10.849055  390057 pod_ready.go:94] pod "etcd-default-k8s-diff-port-772252" is "Ready"
	I1123 10:19:10.849104  390057 pod_ready.go:86] duration metric: took 3.94958ms for pod "etcd-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:10.850833  390057 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:10.854413  390057 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-772252" is "Ready"
	I1123 10:19:10.854433  390057 pod_ready.go:86] duration metric: took 3.576307ms for pod "kube-apiserver-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:10.856388  390057 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:11.040127  390057 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-772252" is "Ready"
	I1123 10:19:11.040157  390057 pod_ready.go:86] duration metric: took 183.748035ms for pod "kube-controller-manager-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:11.241461  390057 pod_ready.go:83] waiting for pod "kube-proxy-xfghg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:11.640683  390057 pod_ready.go:94] pod "kube-proxy-xfghg" is "Ready"
	I1123 10:19:11.640712  390057 pod_ready.go:86] duration metric: took 399.222419ms for pod "kube-proxy-xfghg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:11.840965  390057 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:12.241129  390057 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-772252" is "Ready"
	I1123 10:19:12.241162  390057 pod_ready.go:86] duration metric: took 400.165755ms for pod "kube-scheduler-default-k8s-diff-port-772252" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:19:12.241178  390057 pod_ready.go:40] duration metric: took 32.907281835s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:19:12.282816  390057 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 10:19:12.284590  390057 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-772252" cluster and "default" namespace by default
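(Editor's note) Both profiles in this log finish the same way, with kubeconfig updated to a context named after the profile. A quick sanity check that the contexts exist and answer, using the profile names from this run:

  # Each profile gets a kubeconfig context named after it
  kubectl --context default-k8s-diff-port-772252 get nodes
  kubectl --context newest-cni-956615 get nodes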
	
	
	==> CRI-O <==
	Nov 23 10:19:09 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:09.722883481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:19:09 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:09.766982162Z" level=info msg="Created container ecfb1f7191713b2b7e08f8913c6ed3071ab3fd46d99823ee5dfef933d862b004: kube-system/storage-provisioner/storage-provisioner" id=113afcbe-1472-4699-becc-8d2d14ca3a55 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:19:09 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:09.76766652Z" level=info msg="Starting container: ecfb1f7191713b2b7e08f8913c6ed3071ab3fd46d99823ee5dfef933d862b004" id=a359a624-e278-4225-9abc-00962e6a02e8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:19:09 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:09.769565297Z" level=info msg="Started container" PID=1721 containerID=ecfb1f7191713b2b7e08f8913c6ed3071ab3fd46d99823ee5dfef933d862b004 description=kube-system/storage-provisioner/storage-provisioner id=a359a624-e278-4225-9abc-00962e6a02e8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6ae471e7487a3430bd5e3fad5a62006097bbbb17b421be975b989f371ee3414b
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.329854212Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.334130894Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.33416417Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.334190483Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.337665643Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.337689386Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.337704531Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.341206103Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.341232168Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.341249573Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.344430934Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.344450941Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.344472842Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.347758298Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.347779628Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.347793938Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.351071478Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.351122633Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.351145315Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.354411156Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:19:19 default-k8s-diff-port-772252 crio[566]: time="2025-11-23T10:19:19.354434299Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	ecfb1f7191713       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   6ae471e7487a3       storage-provisioner                                    kube-system
	7c3d5b52c5c83       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   60d293fad6dbf       dashboard-metrics-scraper-6ffb444bf9-4jppx             kubernetes-dashboard
	d1e84d4b33a1e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   78989dc172a98       kubernetes-dashboard-855c9754f9-cbx67                  kubernetes-dashboard
	3685dba7ef10e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   93514dc45e27f       busybox                                                default
	f06dad898472c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   a911f6d1afda9       coredns-66bc5c9577-c5c4c                               kube-system
	6aaac7d5aab2f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   6ae471e7487a3       storage-provisioner                                    kube-system
	d43792ab06a60       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   5bfd5807a255d       kindnet-4dnjf                                          kube-system
	245e87d8d135a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   d3919e0a158a9       kube-proxy-xfghg                                       kube-system
	ca0b7481c92ff       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           52 seconds ago      Running             kube-controller-manager     0                   40a2ff1788463       kube-controller-manager-default-k8s-diff-port-772252   kube-system
	7a142a8a31476       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           52 seconds ago      Running             etcd                        0                   4a48754e5b95d       etcd-default-k8s-diff-port-772252                      kube-system
	a176b6c574c4d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           52 seconds ago      Running             kube-scheduler              0                   2611119415e85       kube-scheduler-default-k8s-diff-port-772252            kube-system
	7db7bd227bf9f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           52 seconds ago      Running             kube-apiserver              0                   3760be18c1f1e       kube-apiserver-default-k8s-diff-port-772252            kube-system
	
	
	==> coredns [f06dad898472c6e7ed3a85518f155634c223f818408388a6b6fff1ecce478bc4] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36393 - 42019 "HINFO IN 3419469779534895088.3506069302199184689. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032435928s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-772252
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-772252
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=default-k8s-diff-port-772252
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_17_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:17:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-772252
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:19:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:19:08 +0000   Sun, 23 Nov 2025 10:17:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:19:08 +0000   Sun, 23 Nov 2025 10:17:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:19:08 +0000   Sun, 23 Nov 2025 10:17:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:19:08 +0000   Sun, 23 Nov 2025 10:17:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-772252
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                752b5ee7-1a37-4c91-8868-54a0bdb64fb2
	  Boot ID:                    37682299-5e60-467e-85b2-43c912a4056e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-c5c4c                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-default-k8s-diff-port-772252                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-4dnjf                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-default-k8s-diff-port-772252             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-772252    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-xfghg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-default-k8s-diff-port-772252             100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4jppx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-cbx67                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 49s                  kube-proxy       
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s (x8 over 117s)  kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x8 over 117s)  kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x8 over 117s)  kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    110s                 kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  110s                 kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     110s                 kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasSufficientPID
	  Normal  Starting                 110s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s                 node-controller  Node default-k8s-diff-port-772252 event: Registered Node default-k8s-diff-port-772252 in Controller
	  Normal  NodeReady                94s                  kubelet          Node default-k8s-diff-port-772252 status is now: NodeReady
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)    kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)    kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)    kubelet          Node default-k8s-diff-port-772252 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                  node-controller  Node default-k8s-diff-port-772252 event: Registered Node default-k8s-diff-port-772252 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[ +16.383752] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 09:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 8d 1b 22 86 40 82 a4 20 0b b4 9c 08 00
	[Nov23 10:14] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[Nov23 10:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 16 63 a6 3b 7c 08 06
	[  +0.000421] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e f8 56 88 48 d7 08 06
	[  +0.082350] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff be 6d 17 98 af e9 08 06
	[  +0.000334] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 9a 6f 0e 9e ca 08 06
	[ +24.687881] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 3c b3 56 e6 32 08 06
	[  +0.000364] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da b2 25 9e f0 5d 08 06
	[Nov23 10:16] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	[ +42.472302] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 bc be 6d 36 b3 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e dd 9f 94 dc 50 08 06
	
	
	==> etcd [7a142a8a31476f2dae05bfa267e6bed44ff2ff202efa2cb9c52dce5a34c9cb88] <==
	{"level":"warn","ts":"2025-11-23T10:18:37.140236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.146449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.153792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.160222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.166390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.172523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.179304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.190268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.197016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.204299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.213432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.220735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.228939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.235962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.242947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.250589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.257011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.264035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.270938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.285244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.291400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.298115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:37.354296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:18:50.985191Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.921054ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-c5c4c\" limit:1 ","response":"range_response_count:1 size:5946"}
	{"level":"info","ts":"2025-11-23T10:18:50.985317Z","caller":"traceutil/trace.go:172","msg":"trace[184269341] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-c5c4c; range_end:; response_count:1; response_revision:630; }","duration":"146.079682ms","start":"2025-11-23T10:18:50.839216Z","end":"2025-11-23T10:18:50.985296Z","steps":["trace[184269341] 'range keys from in-memory index tree'  (duration: 145.646886ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:19:29 up  3:01,  0 user,  load average: 2.68, 4.40, 2.94
	Linux default-k8s-diff-port-772252 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d43792ab06a602698a2e5d811ffc178fcc156441aa702f132a1a4a324793f51c] <==
	I1123 10:18:39.123404       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:18:39.123672       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1123 10:18:39.123831       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:18:39.123851       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:18:39.123873       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:18:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:18:39.324378       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:18:39.324452       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:18:39.324464       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:18:39.324628       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 10:19:09.325207       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 10:19:09.325214       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 10:19:09.325212       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 10:19:09.325212       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1123 10:19:10.924988       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:19:10.925013       1 metrics.go:72] Registering metrics
	I1123 10:19:10.925053       1 controller.go:711] "Syncing nftables rules"
	I1123 10:19:19.329546       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 10:19:19.329586       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7db7bd227bf9ff6dab49de87c436200ac4ce2681564d93007f27e8429ac58b29] <==
	I1123 10:18:37.822704       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 10:18:37.822728       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 10:18:37.823576       1 aggregator.go:171] initial CRD sync complete...
	I1123 10:18:37.823596       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 10:18:37.823602       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:18:37.823609       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:18:37.822143       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 10:18:37.822224       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	E1123 10:18:37.829792       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 10:18:37.832047       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 10:18:37.862953       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:18:37.877847       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:18:37.907524       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 10:18:38.114713       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:18:38.141271       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:18:38.160602       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:18:38.167250       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:18:38.173351       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:18:38.204644       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.118.3"}
	I1123 10:18:38.213191       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.125.107"}
	I1123 10:18:38.724559       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:18:41.201327       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:18:41.201384       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:18:41.553436       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:18:41.751467       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ca0b7481c92ffd4b2bbdda49cb03c9b00d30df31c6dab4f9e33326e98ce4ab98] <==
	I1123 10:18:41.169974       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 10:18:41.172225       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 10:18:41.174452       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 10:18:41.176650       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 10:18:41.179243       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:18:41.183058       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 10:18:41.197534       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 10:18:41.197558       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 10:18:41.197592       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 10:18:41.197680       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 10:18:41.197761       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 10:18:41.199007       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 10:18:41.199036       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 10:18:41.199063       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 10:18:41.199158       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 10:18:41.199166       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 10:18:41.199188       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 10:18:41.199786       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:18:41.200490       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 10:18:41.200517       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:18:41.201652       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:18:41.201659       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 10:18:41.203276       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 10:18:41.205130       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 10:18:41.213614       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [245e87d8d135aee2d7da0358a8becc82fe70154db598981be707ef69925970f0] <==
	I1123 10:18:38.982884       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:18:39.056229       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:18:39.156824       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:18:39.156867       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1123 10:18:39.157002       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:18:39.187356       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:18:39.187447       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:18:39.192776       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:18:39.193224       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:18:39.193282       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:18:39.195033       1 config.go:200] "Starting service config controller"
	I1123 10:18:39.195082       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:18:39.195151       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:18:39.195163       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:18:39.195161       1 config.go:309] "Starting node config controller"
	I1123 10:18:39.195179       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:18:39.195184       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:18:39.195180       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:18:39.295692       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:18:39.295732       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:18:39.295730       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:18:39.295766       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a176b6c574c4db89ccebca8123845fafee7b14ca1a0baae180f32d747de3393a] <==
	I1123 10:18:37.778294       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:18:37.781195       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:18:37.781290       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:18:37.782372       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:18:37.782463       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1123 10:18:37.784892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 10:18:37.784698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 10:18:37.791364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 10:18:37.791388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 10:18:37.796752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 10:18:37.797216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 10:18:37.797417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 10:18:37.797417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 10:18:37.797788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 10:18:37.798065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 10:18:37.798111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 10:18:37.798155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 10:18:37.798286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 10:18:37.800376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 10:18:37.801839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 10:18:37.801957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 10:18:37.802219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 10:18:37.802422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 10:18:37.808320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1123 10:18:39.081940       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:18:41 default-k8s-diff-port-772252 kubelet[731]: I1123 10:18:41.779779     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1a366b58-3166-4114-bd99-9b1dd0648311-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-cbx67\" (UID: \"1a366b58-3166-4114-bd99-9b1dd0648311\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cbx67"
	Nov 23 10:18:41 default-k8s-diff-port-772252 kubelet[731]: I1123 10:18:41.779822     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgf7d\" (UniqueName: \"kubernetes.io/projected/1a366b58-3166-4114-bd99-9b1dd0648311-kube-api-access-tgf7d\") pod \"kubernetes-dashboard-855c9754f9-cbx67\" (UID: \"1a366b58-3166-4114-bd99-9b1dd0648311\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cbx67"
	Nov 23 10:18:41 default-k8s-diff-port-772252 kubelet[731]: I1123 10:18:41.779842     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x52bx\" (UniqueName: \"kubernetes.io/projected/57299988-81da-4cbe-b187-b18dcc5efda2-kube-api-access-x52bx\") pod \"dashboard-metrics-scraper-6ffb444bf9-4jppx\" (UID: \"57299988-81da-4cbe-b187-b18dcc5efda2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4jppx"
	Nov 23 10:18:41 default-k8s-diff-port-772252 kubelet[731]: I1123 10:18:41.779862     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/57299988-81da-4cbe-b187-b18dcc5efda2-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4jppx\" (UID: \"57299988-81da-4cbe-b187-b18dcc5efda2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4jppx"
	Nov 23 10:18:46 default-k8s-diff-port-772252 kubelet[731]: I1123 10:18:46.656516     731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cbx67" podStartSLOduration=1.901573805 podStartE2EDuration="5.656492237s" podCreationTimestamp="2025-11-23 10:18:41 +0000 UTC" firstStartedPulling="2025-11-23 10:18:42.001306075 +0000 UTC m=+6.514469531" lastFinishedPulling="2025-11-23 10:18:45.756224517 +0000 UTC m=+10.269387963" observedRunningTime="2025-11-23 10:18:46.656430407 +0000 UTC m=+11.169593871" watchObservedRunningTime="2025-11-23 10:18:46.656492237 +0000 UTC m=+11.169655701"
	Nov 23 10:18:48 default-k8s-diff-port-772252 kubelet[731]: I1123 10:18:48.650338     731 scope.go:117] "RemoveContainer" containerID="7ae9105073053f5c93ef114fdbc842989f5ec3e066b1bf8f9adef906a76cd6e8"
	Nov 23 10:18:49 default-k8s-diff-port-772252 kubelet[731]: I1123 10:18:49.654555     731 scope.go:117] "RemoveContainer" containerID="7ae9105073053f5c93ef114fdbc842989f5ec3e066b1bf8f9adef906a76cd6e8"
	Nov 23 10:18:49 default-k8s-diff-port-772252 kubelet[731]: I1123 10:18:49.654743     731 scope.go:117] "RemoveContainer" containerID="0a5881fad61127a6e370d27a94cde49b6581f3c43f826720913df5990fcc9a84"
	Nov 23 10:18:49 default-k8s-diff-port-772252 kubelet[731]: E1123 10:18:49.654974     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4jppx_kubernetes-dashboard(57299988-81da-4cbe-b187-b18dcc5efda2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4jppx" podUID="57299988-81da-4cbe-b187-b18dcc5efda2"
	Nov 23 10:18:50 default-k8s-diff-port-772252 kubelet[731]: I1123 10:18:50.659212     731 scope.go:117] "RemoveContainer" containerID="0a5881fad61127a6e370d27a94cde49b6581f3c43f826720913df5990fcc9a84"
	Nov 23 10:18:50 default-k8s-diff-port-772252 kubelet[731]: E1123 10:18:50.659368     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4jppx_kubernetes-dashboard(57299988-81da-4cbe-b187-b18dcc5efda2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4jppx" podUID="57299988-81da-4cbe-b187-b18dcc5efda2"
	Nov 23 10:18:52 default-k8s-diff-port-772252 kubelet[731]: I1123 10:18:52.739119     731 scope.go:117] "RemoveContainer" containerID="0a5881fad61127a6e370d27a94cde49b6581f3c43f826720913df5990fcc9a84"
	Nov 23 10:18:52 default-k8s-diff-port-772252 kubelet[731]: E1123 10:18:52.739334     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4jppx_kubernetes-dashboard(57299988-81da-4cbe-b187-b18dcc5efda2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4jppx" podUID="57299988-81da-4cbe-b187-b18dcc5efda2"
	Nov 23 10:19:04 default-k8s-diff-port-772252 kubelet[731]: I1123 10:19:04.593786     731 scope.go:117] "RemoveContainer" containerID="0a5881fad61127a6e370d27a94cde49b6581f3c43f826720913df5990fcc9a84"
	Nov 23 10:19:04 default-k8s-diff-port-772252 kubelet[731]: I1123 10:19:04.698114     731 scope.go:117] "RemoveContainer" containerID="0a5881fad61127a6e370d27a94cde49b6581f3c43f826720913df5990fcc9a84"
	Nov 23 10:19:04 default-k8s-diff-port-772252 kubelet[731]: I1123 10:19:04.698323     731 scope.go:117] "RemoveContainer" containerID="7c3d5b52c5c83de3ca67ce90bb05bdd0ceb08abe56ed1f6ae756cc422b40a7a5"
	Nov 23 10:19:04 default-k8s-diff-port-772252 kubelet[731]: E1123 10:19:04.698519     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4jppx_kubernetes-dashboard(57299988-81da-4cbe-b187-b18dcc5efda2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4jppx" podUID="57299988-81da-4cbe-b187-b18dcc5efda2"
	Nov 23 10:19:09 default-k8s-diff-port-772252 kubelet[731]: I1123 10:19:09.714547     731 scope.go:117] "RemoveContainer" containerID="6aaac7d5aab2fdbe3b38a918864ef4d8be7510c3bdc381a0f0c2f96fa7f330d6"
	Nov 23 10:19:12 default-k8s-diff-port-772252 kubelet[731]: I1123 10:19:12.739491     731 scope.go:117] "RemoveContainer" containerID="7c3d5b52c5c83de3ca67ce90bb05bdd0ceb08abe56ed1f6ae756cc422b40a7a5"
	Nov 23 10:19:12 default-k8s-diff-port-772252 kubelet[731]: E1123 10:19:12.739718     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4jppx_kubernetes-dashboard(57299988-81da-4cbe-b187-b18dcc5efda2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4jppx" podUID="57299988-81da-4cbe-b187-b18dcc5efda2"
	Nov 23 10:19:24 default-k8s-diff-port-772252 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:19:24 default-k8s-diff-port-772252 kubelet[731]: I1123 10:19:24.333051     731 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 23 10:19:24 default-k8s-diff-port-772252 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:19:24 default-k8s-diff-port-772252 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 10:19:24 default-k8s-diff-port-772252 systemd[1]: kubelet.service: Consumed 1.531s CPU time.
	
	
	==> kubernetes-dashboard [d1e84d4b33a1e182b32a2df434b3eb1086c1002fcd0c9d64f056f4a58c281c75] <==
	2025/11/23 10:18:45 Starting overwatch
	2025/11/23 10:18:45 Using namespace: kubernetes-dashboard
	2025/11/23 10:18:45 Using in-cluster config to connect to apiserver
	2025/11/23 10:18:45 Using secret token for csrf signing
	2025/11/23 10:18:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 10:18:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 10:18:45 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 10:18:45 Generating JWE encryption key
	2025/11/23 10:18:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 10:18:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 10:18:45 Initializing JWE encryption key from synchronized object
	2025/11/23 10:18:45 Creating in-cluster Sidecar client
	2025/11/23 10:18:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:18:45 Serving insecurely on HTTP port: 9090
	2025/11/23 10:19:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6aaac7d5aab2fdbe3b38a918864ef4d8be7510c3bdc381a0f0c2f96fa7f330d6] <==
	I1123 10:18:38.956817       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 10:19:08.958998       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ecfb1f7191713b2b7e08f8913c6ed3071ab3fd46d99823ee5dfef933d862b004] <==
	I1123 10:19:09.782332       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:19:09.789498       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:19:09.789560       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:19:09.791715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:13.246558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:17.506830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:21.105820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:24.159265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:27.181930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:27.186149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:19:27.186325       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:19:27.186461       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ed91b9a4-76da-498a-b1ac-8ef14ef3f49c", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-772252_b60c49f7-a433-4b77-96cd-c9a56d54eb71 became leader
	I1123 10:19:27.186503       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-772252_b60c49f7-a433-4b77-96cd-c9a56d54eb71!
	W1123 10:19:27.188436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:27.191590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:19:27.286813       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-772252_b60c49f7-a433-4b77-96cd-c9a56d54eb71!
	W1123 10:19:29.195244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:19:29.199112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-772252 -n default-k8s-diff-port-772252
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-772252 -n default-k8s-diff-port-772252: exit status 2 (340.160292ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
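
Aside for readers of this post-mortem: the status check above passes a Go template to minikube's --format flag, and "{{.APIServer}}" is rendered against the profile's status structure to produce the bare "Running" seen in the stdout block. The sketch below is only an illustration of that template mechanism; the Status struct and its values here are stand-ins, not minikube's actual type.

    package main

    import (
    	"os"
    	"text/template"
    )

    // Status is a stand-in for the structure a `status --format` template is
    // rendered against; the field names are assumptions for this example.
    type Status struct {
    	Host      string
    	Kubelet   string
    	APIServer string
    }

    func main() {
    	// The same template string passed to `minikube status --format` above.
    	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
    	s := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
    	if err := tmpl.Execute(os.Stdout, s); err != nil {
    		panic(err)
    	}
    	// Prints: Running
    }
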
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-772252 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.93s)
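
For context on the storage-provisioner log in the post-mortem above: the "attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath ... successfully acquired lease" lines are the standard client-go leader-election handshake, and the surrounding warnings show this provisioner still uses a deprecated v1 Endpoints lock. The sketch below is a minimal, illustrative example of that pattern using the current Lease-based lock instead; the lock name, namespace, identity, and timings are assumptions for the example, not minikube's actual configuration.

    package main

    import (
    	"context"
    	"log"
    	"os"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    	"k8s.io/client-go/tools/leaderelection"
    	"k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
    	// Like the provisioner in the log, assume we are running in-cluster.
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		log.Fatal(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	hostname, _ := os.Hostname()
    	lock := &resourcelock.LeaseLock{
    		// Placeholder lock name/namespace; minikube's provisioner uses its own
    		// Endpoints-based lock named k8s.io-minikube-hostpath in kube-system.
    		LeaseMeta:  metav1.ObjectMeta{Name: "example-hostpath-lock", Namespace: "kube-system"},
    		Client:     client.CoordinationV1(),
    		LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
    	}

    	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
    		Lock:            lock,
    		LeaseDuration:   15 * time.Second,
    		RenewDeadline:   10 * time.Second,
    		RetryPeriod:     2 * time.Second,
    		ReleaseOnCancel: true,
    		Callbacks: leaderelection.LeaderCallbacks{
    			OnStartedLeading: func(ctx context.Context) {
    				// Corresponds to "Starting provisioner controller" in the log.
    				log.Println("acquired lease; starting controller")
    				<-ctx.Done()
    			},
    			OnStoppedLeading: func() {
    				log.Println("lost lease; shutting down")
    			},
    		},
    	})
    }
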

                                                
                                    

Test pass (263/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 31.64
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 14.49
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.4
21 TestBinaryMirror 0.87
22 TestOffline 55.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 131.67
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 8.42
48 TestAddons/StoppedEnableDisable 18.55
49 TestCertOptions 29.26
50 TestCertExpiration 214.4
52 TestForceSystemdFlag 29.04
53 TestForceSystemdEnv 29.26
58 TestErrorSpam/setup 19.37
59 TestErrorSpam/start 0.66
60 TestErrorSpam/status 0.95
61 TestErrorSpam/pause 6.39
62 TestErrorSpam/unpause 6.06
63 TestErrorSpam/stop 2.6
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 66.67
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.2
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.7
75 TestFunctional/serial/CacheCmd/cache/add_local 1.94
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.6
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 62.87
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.24
86 TestFunctional/serial/LogsFileCmd 1.25
87 TestFunctional/serial/InvalidService 3.6
89 TestFunctional/parallel/ConfigCmd 0.48
90 TestFunctional/parallel/DashboardCmd 6.82
91 TestFunctional/parallel/DryRun 0.53
92 TestFunctional/parallel/InternationalLanguage 0.22
93 TestFunctional/parallel/StatusCmd 0.95
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 54.07
101 TestFunctional/parallel/SSHCmd 0.66
102 TestFunctional/parallel/CpCmd 1.81
103 TestFunctional/parallel/MySQL 18.32
104 TestFunctional/parallel/FileSync 0.31
105 TestFunctional/parallel/CertSync 1.84
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.78
113 TestFunctional/parallel/License 0.56
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 44.21
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
127 TestFunctional/parallel/ProfileCmd/profile_list 0.4
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
129 TestFunctional/parallel/MountCmd/any-port 8.07
130 TestFunctional/parallel/MountCmd/specific-port 1.95
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.27
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.52
135 TestFunctional/parallel/Version/short 0.08
136 TestFunctional/parallel/Version/components 0.52
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.83
142 TestFunctional/parallel/ImageCommands/Setup 1.98
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
150 TestFunctional/parallel/ServiceCmd/List 1.71
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.72
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 145.26
163 TestMultiControlPlane/serial/DeployApp 5.65
164 TestMultiControlPlane/serial/PingHostFromPods 1.02
165 TestMultiControlPlane/serial/AddWorkerNode 27.51
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
168 TestMultiControlPlane/serial/CopyFile 17.14
169 TestMultiControlPlane/serial/StopSecondaryNode 13.78
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.73
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.83
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.94
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 107.54
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.65
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
176 TestMultiControlPlane/serial/StopCluster 42.95
177 TestMultiControlPlane/serial/RestartCluster 58.51
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
179 TestMultiControlPlane/serial/AddSecondaryNode 71.29
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
185 TestJSONOutput/start/Command 69.32
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 7.99
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 35.09
211 TestKicCustomNetwork/use_default_bridge_network 22.64
212 TestKicExistingNetwork 25.92
213 TestKicCustomSubnet 23.25
214 TestKicStaticIP 27.42
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 52.55
219 TestMountStart/serial/StartWithMountFirst 4.8
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 7.75
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.66
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.26
226 TestMountStart/serial/RestartStopped 7.88
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 63.36
231 TestMultiNode/serial/DeployApp2Nodes 4.25
232 TestMultiNode/serial/PingHostFrom2Pods 0.72
233 TestMultiNode/serial/AddNode 24.07
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.66
236 TestMultiNode/serial/CopyFile 9.84
237 TestMultiNode/serial/StopNode 2.25
238 TestMultiNode/serial/StartAfterStop 7.18
239 TestMultiNode/serial/RestartKeepsNodes 79.35
240 TestMultiNode/serial/DeleteNode 5.24
241 TestMultiNode/serial/StopMultiNode 30.27
242 TestMultiNode/serial/RestartMultiNode 50.71
243 TestMultiNode/serial/ValidateNameConflict 23.08
250 TestScheduledStopUnix 97.34
253 TestInsufficientStorage 12.27
254 TestRunningBinaryUpgrade 49.05
256 TestKubernetesUpgrade 313.09
257 TestMissingContainerUpgrade 135.79
259 TestPause/serial/Start 49.58
260 TestPause/serial/SecondStartNoReconfiguration 9.06
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
264 TestNoKubernetes/serial/StartWithK8s 23.14
272 TestNetworkPlugins/group/false 3.69
273 TestNoKubernetes/serial/StartWithStopK8s 16.22
277 TestNoKubernetes/serial/Start 4.16
278 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
279 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
280 TestNoKubernetes/serial/ProfileList 19.7
281 TestNoKubernetes/serial/Stop 1.3
282 TestNoKubernetes/serial/StartNoArgs 7.58
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
284 TestStoppedBinaryUpgrade/Setup 3.1
285 TestStoppedBinaryUpgrade/Upgrade 38.58
293 TestNetworkPlugins/group/auto/Start 45.08
294 TestStoppedBinaryUpgrade/MinikubeLogs 1.01
295 TestNetworkPlugins/group/kindnet/Start 42.11
296 TestNetworkPlugins/group/auto/KubeletFlags 0.32
297 TestNetworkPlugins/group/auto/NetCatPod 8.2
298 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
299 TestNetworkPlugins/group/auto/DNS 0.1
300 TestNetworkPlugins/group/auto/Localhost 0.08
301 TestNetworkPlugins/group/auto/HairPin 0.08
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
303 TestNetworkPlugins/group/kindnet/NetCatPod 8.18
304 TestNetworkPlugins/group/kindnet/DNS 0.14
305 TestNetworkPlugins/group/kindnet/Localhost 0.09
306 TestNetworkPlugins/group/kindnet/HairPin 0.1
307 TestNetworkPlugins/group/calico/Start 51.74
308 TestNetworkPlugins/group/custom-flannel/Start 52.42
309 TestNetworkPlugins/group/enable-default-cni/Start 39.77
310 TestNetworkPlugins/group/flannel/Start 56.04
311 TestNetworkPlugins/group/calico/ControllerPod 6.01
312 TestNetworkPlugins/group/calico/KubeletFlags 0.36
313 TestNetworkPlugins/group/calico/NetCatPod 9.24
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
315 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.19
316 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
317 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.31
318 TestNetworkPlugins/group/calico/DNS 0.11
319 TestNetworkPlugins/group/calico/Localhost 0.09
320 TestNetworkPlugins/group/calico/HairPin 0.09
321 TestNetworkPlugins/group/custom-flannel/DNS 0.11
322 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
323 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
324 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
325 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
326 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
327 TestNetworkPlugins/group/flannel/ControllerPod 6.01
328 TestNetworkPlugins/group/flannel/KubeletFlags 0.35
329 TestNetworkPlugins/group/flannel/NetCatPod 8.26
330 TestNetworkPlugins/group/bridge/Start 67.84
332 TestStartStop/group/old-k8s-version/serial/FirstStart 51.97
334 TestStartStop/group/no-preload/serial/FirstStart 57.5
335 TestNetworkPlugins/group/flannel/DNS 0.15
336 TestNetworkPlugins/group/flannel/Localhost 0.11
337 TestNetworkPlugins/group/flannel/HairPin 0.12
339 TestStartStop/group/embed-certs/serial/FirstStart 40.4
340 TestStartStop/group/old-k8s-version/serial/DeployApp 9.28
341 TestStartStop/group/no-preload/serial/DeployApp 10.21
342 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
343 TestNetworkPlugins/group/bridge/NetCatPod 8.17
345 TestStartStop/group/old-k8s-version/serial/Stop 16.08
346 TestStartStop/group/embed-certs/serial/DeployApp 9.22
348 TestNetworkPlugins/group/bridge/DNS 0.13
349 TestNetworkPlugins/group/bridge/Localhost 0.1
350 TestNetworkPlugins/group/bridge/HairPin 0.11
351 TestStartStop/group/no-preload/serial/Stop 18.67
353 TestStartStop/group/embed-certs/serial/Stop 18.13
354 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
355 TestStartStop/group/old-k8s-version/serial/SecondStart 52.17
356 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
357 TestStartStop/group/no-preload/serial/SecondStart 49.52
359 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 46.21
360 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
361 TestStartStop/group/embed-certs/serial/SecondStart 45.17
362 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
363 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.23
364 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
365 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
366 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
367 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
369 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
371 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.17
372 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
373 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
375 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
377 TestStartStop/group/newest-cni/serial/FirstStart 25.22
379 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
380 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 43.66
381 TestStartStop/group/newest-cni/serial/DeployApp 0
383 TestStartStop/group/newest-cni/serial/Stop 8.47
384 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
385 TestStartStop/group/newest-cni/serial/SecondStart 10.12
386 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
390 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
391 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
392 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
x
+
TestDownloadOnly/v1.28.0/json-events (31.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-734762 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-734762 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (31.63572437s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (31.64s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1123 09:22:24.837955   67870 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1123 09:22:24.838054   67870 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-734762
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-734762: exit status 85 (73.463013ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-734762 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-734762 │ jenkins │ v1.37.0 │ 23 Nov 25 09:21 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:21:53
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:21:53.255193   67882 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:21:53.255292   67882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:21:53.255300   67882 out.go:374] Setting ErrFile to fd 2...
	I1123 09:21:53.255304   67882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:21:53.255496   67882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	W1123 09:21:53.255635   67882 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21968-64343/.minikube/config/config.json: open /home/jenkins/minikube-integration/21968-64343/.minikube/config/config.json: no such file or directory
	I1123 09:21:53.256122   67882 out.go:368] Setting JSON to true
	I1123 09:21:53.256965   67882 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7454,"bootTime":1763882259,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:21:53.257019   67882 start.go:143] virtualization: kvm guest
	I1123 09:21:53.260818   67882 out.go:99] [download-only-734762] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:21:53.260961   67882 notify.go:221] Checking for updates...
	W1123 09:21:53.261005   67882 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball: no such file or directory
	I1123 09:21:53.261994   67882 out.go:171] MINIKUBE_LOCATION=21968
	I1123 09:21:53.262925   67882 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:21:53.263935   67882 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 09:21:53.264862   67882 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 09:21:53.265913   67882 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1123 09:21:53.268488   67882 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 09:21:53.268716   67882 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:21:53.292118   67882 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:21:53.292236   67882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:21:53.632689   67882 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-23 09:21:53.622361379 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:21:53.632810   67882 docker.go:319] overlay module found
	I1123 09:21:53.634256   67882 out.go:99] Using the docker driver based on user configuration
	I1123 09:21:53.634289   67882 start.go:309] selected driver: docker
	I1123 09:21:53.634298   67882 start.go:927] validating driver "docker" against <nil>
	I1123 09:21:53.634383   67882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:21:53.690211   67882 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-23 09:21:53.680762521 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:21:53.690387   67882 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 09:21:53.690864   67882 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1123 09:21:53.691017   67882 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 09:21:53.692561   67882 out.go:171] Using Docker driver with root privileges
	I1123 09:21:53.693498   67882 cni.go:84] Creating CNI manager for ""
	I1123 09:21:53.693559   67882 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:21:53.693571   67882 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 09:21:53.693647   67882 start.go:353] cluster config:
	{Name:download-only-734762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-734762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:21:53.694576   67882 out.go:99] Starting "download-only-734762" primary control-plane node in "download-only-734762" cluster
	I1123 09:21:53.694591   67882 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:21:53.695463   67882 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:21:53.695526   67882 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 09:21:53.695656   67882 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:21:53.711682   67882 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 09:21:53.711880   67882 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 09:21:53.711987   67882 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 09:21:53.799722   67882 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1123 09:21:53.799755   67882 cache.go:65] Caching tarball of preloaded images
	I1123 09:21:53.799960   67882 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 09:21:53.801560   67882 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1123 09:21:53.801580   67882 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1123 09:21:53.910277   67882 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1123 09:21:53.910394   67882 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1123 09:22:07.621632   67882 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1123 09:22:07.622021   67882 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/download-only-734762/config.json ...
	I1123 09:22:07.622066   67882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/download-only-734762/config.json: {Name:mk011c9be53870e4afa9659b074029c4a5b6b8a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:22:07.622338   67882 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 09:22:07.622551   67882 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21968-64343/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-734762 host does not exist
	  To start a cluster, run: "minikube start -p download-only-734762"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
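
The "Last Start" log above shows the preload tarball being fetched with an md5 checksum obtained from the GCS API and appended to the download URL as a ?checksum=md5:... query parameter. Below is a minimal, illustrative Go sketch of downloading a file while verifying an md5 checksum; the URL and destination path are placeholders, and the expected hash is simply the value reported in the log, so this is not minikube's actual downloader.

    package main

    import (
    	"crypto/md5"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    // downloadWithMD5 fetches url into dest and fails if the payload's md5
    // does not match want (hex-encoded), mirroring the checksum step in the log.
    func downloadWithMD5(url, dest, want string) error {
    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("unexpected status: %s", resp.Status)
    	}

    	out, err := os.Create(dest)
    	if err != nil {
    		return err
    	}
    	defer out.Close()

    	h := md5.New()
    	// Hash the bytes as they are streamed to disk.
    	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != want {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
    	}
    	return nil
    }

    func main() {
    	// Placeholder URL and path; the checksum is the one reported by the GCS API above.
    	err := downloadWithMD5(
    		"https://example.com/preloaded-images.tar.lz4",
    		"/tmp/preloaded-images.tar.lz4",
    		"72bc7f8573f574c02d8c9a9b3496176b",
    	)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
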

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-734762
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (14.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-581985 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-581985 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (14.487144519s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (14.49s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1123 09:22:39.756223   67870 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1123 09:22:39.756270   67870 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-581985
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-581985: exit status 85 (70.158843ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-734762 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-734762 │ jenkins │ v1.37.0 │ 23 Nov 25 09:21 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ delete  │ -p download-only-734762                                                                                                                                                   │ download-only-734762 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ start   │ -o=json --download-only -p download-only-581985 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-581985 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:22:25
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:22:25.320452   68323 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:22:25.320683   68323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:22:25.320691   68323 out.go:374] Setting ErrFile to fd 2...
	I1123 09:22:25.320695   68323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:22:25.320891   68323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:22:25.321343   68323 out.go:368] Setting JSON to true
	I1123 09:22:25.322140   68323 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7486,"bootTime":1763882259,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:22:25.322195   68323 start.go:143] virtualization: kvm guest
	I1123 09:22:25.323756   68323 out.go:99] [download-only-581985] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:22:25.323881   68323 notify.go:221] Checking for updates...
	I1123 09:22:25.325075   68323 out.go:171] MINIKUBE_LOCATION=21968
	I1123 09:22:25.326429   68323 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:22:25.327562   68323 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 09:22:25.328496   68323 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 09:22:25.329429   68323 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1123 09:22:25.331191   68323 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 09:22:25.331401   68323 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:22:25.352732   68323 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:22:25.352871   68323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:22:25.408894   68323 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-23 09:22:25.399339063 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:22:25.409059   68323 docker.go:319] overlay module found
	I1123 09:22:25.410595   68323 out.go:99] Using the docker driver based on user configuration
	I1123 09:22:25.410624   68323 start.go:309] selected driver: docker
	I1123 09:22:25.410632   68323 start.go:927] validating driver "docker" against <nil>
	I1123 09:22:25.410727   68323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:22:25.468304   68323 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-23 09:22:25.459823812 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:22:25.468476   68323 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 09:22:25.468938   68323 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1123 09:22:25.469074   68323 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 09:22:25.470674   68323 out.go:171] Using Docker driver with root privileges
	I1123 09:22:25.471712   68323 cni.go:84] Creating CNI manager for ""
	I1123 09:22:25.471780   68323 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:22:25.471793   68323 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 09:22:25.471853   68323 start.go:353] cluster config:
	{Name:download-only-581985 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-581985 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:22:25.472822   68323 out.go:99] Starting "download-only-581985" primary control-plane node in "download-only-581985" cluster
	I1123 09:22:25.472835   68323 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:22:25.473758   68323 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:22:25.473789   68323 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:22:25.473880   68323 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:22:25.489508   68323 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 09:22:25.489640   68323 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 09:22:25.489654   68323 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1123 09:22:25.489658   68323 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1123 09:22:25.489665   68323 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1123 09:22:25.902337   68323 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 09:22:25.902370   68323 cache.go:65] Caching tarball of preloaded images
	I1123 09:22:25.902557   68323 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:22:25.904111   68323 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1123 09:22:25.904135   68323 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1123 09:22:26.012131   68323 preload.go:295] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1123 09:22:26.012186   68323 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21968-64343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-581985 host does not exist
	  To start a cluster, run: "minikube start -p download-only-581985"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-581985
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.4s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-707806 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-707806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-707806
--- PASS: TestDownloadOnlyKic (0.40s)

                                                
                                    
x
+
TestBinaryMirror (0.87s)

                                                
                                                
=== RUN   TestBinaryMirror
I1123 09:22:40.855580   67870 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-045361 --alsologtostderr --binary-mirror http://127.0.0.1:36233 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-045361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-045361
--- PASS: TestBinaryMirror (0.87s)

                                                
                                    
x
+
TestOffline (55.59s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-065092 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-065092 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (52.074437769s)
helpers_test.go:175: Cleaning up "offline-crio-065092" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-065092
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-065092: (3.519464964s)
--- PASS: TestOffline (55.59s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-768607
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-768607: exit status 85 (65.776592ms)

                                                
                                                
-- stdout --
	* Profile "addons-768607" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-768607"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-768607
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-768607: exit status 85 (65.075826ms)

                                                
                                                
-- stdout --
	* Profile "addons-768607" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-768607"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (131.67s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-768607 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-768607 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m11.669657193s)
--- PASS: TestAddons/Setup (131.67s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-768607 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-768607 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (8.42s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-768607 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-768607 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e9dc25fe-97c5-431f-bdc9-31e095db24ec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e9dc25fe-97c5-431f-bdc9-31e095db24ec] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003178355s
addons_test.go:694: (dbg) Run:  kubectl --context addons-768607 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-768607 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-768607 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.42s)
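A minimal sketch of re-checking this by hand (context, pod name, and service account are taken from this run and will differ elsewhere); the gcp-auth addon is expected to have injected the credential environment variables that the test prints:
	kubectl --context addons-768607 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
	kubectl --context addons-768607 exec busybox -- printenv GOOGLE_CLOUD_PROJECT
	kubectl --context addons-768607 describe sa gcp-auth-test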

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (18.55s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-768607
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-768607: (18.271051761s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-768607
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-768607
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-768607
--- PASS: TestAddons/StoppedEnableDisable (18.55s)

                                                
                                    
x
+
TestCertOptions (29.26s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-774801 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-774801 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (23.863394994s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-774801 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-774801 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-774801 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-774801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-774801
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-774801: (4.70336364s)
--- PASS: TestCertOptions (29.26s)
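A minimal sketch of inspecting the same certificate by hand (profile name taken from this run); the SAN list should contain the extra --apiserver-ips/--apiserver-names values, and the kubeconfig server URL should use port 8555:
	minikube -p cert-options-774801 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A2 "Subject Alternative Name"
	kubectl --context cert-options-774801 config view --minify -o jsonpath='{.clusters[0].cluster.server}'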

                                                
                                    
x
+
TestCertExpiration (214.4s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-081181 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-081181 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (24.79212553s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-081181 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-081181 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (7.031066407s)
helpers_test.go:175: Cleaning up "cert-expiration-081181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-081181
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-081181: (2.572428115s)
--- PASS: TestCertExpiration (214.40s)
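A rough sketch of the flow exercised above (profile name from this run): start with short-lived certificates, then restart with a longer --cert-expiration once they have run out, which should cause them to be regenerated; the new end date can be confirmed inside the node:
	minikube start -p cert-expiration-081181 --cert-expiration=3m --driver=docker --container-runtime=crio
	minikube start -p cert-expiration-081181 --cert-expiration=8760h --driver=docker --container-runtime=crio
	minikube -p cert-expiration-081181 ssh "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"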

                                                
                                    
x
+
TestForceSystemdFlag (29.04s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-956676 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-956676 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.143555129s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-956676 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-956676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-956676
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-956676: (2.563046172s)
--- PASS: TestForceSystemdFlag (29.04s)
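A minimal sketch of checking the same drop-in file directly (profile name from this run); cgroup_manager is the CRI-O setting that --force-systemd is expected to switch to "systemd":
	minikube -p force-systemd-flag-956676 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"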

                                                
                                    
x
+
TestForceSystemdEnv (29.26s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-465707 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-465707 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.72753966s)
helpers_test.go:175: Cleaning up "force-systemd-env-465707" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-465707
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-465707: (2.532460796s)
--- PASS: TestForceSystemdEnv (29.26s)

                                                
                                    
x
+
TestErrorSpam/setup (19.37s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-512986 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-512986 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-512986 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-512986 --driver=docker  --container-runtime=crio: (19.373604171s)
--- PASS: TestErrorSpam/setup (19.37s)

                                                
                                    
x
+
TestErrorSpam/start (0.66s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 start --dry-run
--- PASS: TestErrorSpam/start (0.66s)

                                                
                                    
x
+
TestErrorSpam/status (0.95s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 status
--- PASS: TestErrorSpam/status (0.95s)

                                                
                                    
x
+
TestErrorSpam/pause (6.39s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 pause: exit status 80 (2.354921734s)

                                                
                                                
-- stdout --
	* Pausing node nospam-512986 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:28:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 pause: exit status 80 (2.374359854s)

                                                
                                                
-- stdout --
	* Pausing node nospam-512986 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:28:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 pause: exit status 80 (1.659380267s)

                                                
                                                
-- stdout --
	* Pausing node nospam-512986 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:28:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.39s)
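All three pause attempts above fail the same way: minikube shells into the node and runs "sudo runc list -f json", which errors because /run/runc does not exist. A minimal sketch of reproducing that check by hand (profile name from this run, node assumed still running):
	minikube -p nospam-512986 ssh "sudo runc list -f json"
	minikube -p nospam-512986 ssh "ls /run/runc"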

                                                
                                    
x
+
TestErrorSpam/unpause (6.06s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 unpause: exit status 80 (1.831703389s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-512986 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:28:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 unpause: exit status 80 (1.96054663s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-512986 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:28:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 unpause: exit status 80 (2.270239327s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-512986 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:28:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.06s)

                                                
                                    
x
+
TestErrorSpam/stop (2.6s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 stop: (2.393521066s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512986 --log_dir /tmp/nospam-512986 stop
--- PASS: TestErrorSpam/stop (2.60s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21968-64343/.minikube/files/etc/test/nested/copy/67870/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (66.67s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-157940 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-157940 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m6.668239831s)
--- PASS: TestFunctional/serial/StartWithProxy (66.67s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (6.2s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1123 09:29:49.670625   67870 config.go:182] Loaded profile config "functional-157940": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-157940 --alsologtostderr -v=8
E1123 09:29:54.009807   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:29:54.016257   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:29:54.027708   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:29:54.049214   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:29:54.091044   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:29:54.173199   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:29:54.335150   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:29:54.656805   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:29:55.298288   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-157940 --alsologtostderr -v=8: (6.194064595s)
functional_test.go:678: soft start took 6.194817886s for "functional-157940" cluster.
I1123 09:29:55.865066   67870 config.go:182] Loaded profile config "functional-157940": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.20s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-157940 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.7s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 cache add registry.k8s.io/pause:3.1
E1123 09:29:56.579898   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.70s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.94s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-157940 /tmp/TestFunctionalserialCacheCmdcacheadd_local102042527/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 cache add minikube-local-cache-test:functional-157940
E1123 09:29:59.141765   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-157940 cache add minikube-local-cache-test:functional-157940: (1.625074708s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 cache delete minikube-local-cache-test:functional-157940
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-157940
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.94s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.6s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-157940 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (292.003557ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.60s)
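A minimal sketch of the round trip exercised above (profile and image names from this run): remove the image from the node, confirm it is gone, then let "cache reload" push it back from the host-side cache:
	minikube -p functional-157940 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-157940 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	minikube -p functional-157940 cache reload
	minikube -p functional-157940 ssh sudo crictl inspecti registry.k8s.io/pause:latest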

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 kubectl -- --context functional-157940 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-157940 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (62.87s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-157940 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1123 09:30:04.263692   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:30:14.505416   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:30:34.987298   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-157940 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m2.873461853s)
functional_test.go:776: restart took 1m2.873604943s for "functional-157940" cluster.
I1123 09:31:05.891103   67870 config.go:182] Loaded profile config "functional-157940": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (62.87s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-157940 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.24s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-157940 logs: (1.241319687s)
--- PASS: TestFunctional/serial/LogsCmd (1.24s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 logs --file /tmp/TestFunctionalserialLogsFileCmd3646802494/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-157940 logs --file /tmp/TestFunctionalserialLogsFileCmd3646802494/001/logs.txt: (1.246378406s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.25s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (3.6s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-157940 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-157940
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-157940: exit status 115 (352.378848ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31216 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-157940 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.60s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-157940 config get cpus: exit status 14 (96.617583ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-157940 config get cpus: exit status 14 (82.164418ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (6.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-157940 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-157940 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 107788: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.82s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-157940 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-157940 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (225.76183ms)

                                                
                                                
-- stdout --
	* [functional-157940] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:32:12.472705  106098 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:32:12.472820  106098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:32:12.472831  106098 out.go:374] Setting ErrFile to fd 2...
	I1123 09:32:12.472838  106098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:32:12.473191  106098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:32:12.473815  106098 out.go:368] Setting JSON to false
	I1123 09:32:12.475112  106098 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8073,"bootTime":1763882259,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:32:12.475198  106098 start.go:143] virtualization: kvm guest
	I1123 09:32:12.477292  106098 out.go:179] * [functional-157940] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:32:12.478576  106098 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 09:32:12.478609  106098 notify.go:221] Checking for updates...
	I1123 09:32:12.481707  106098 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:32:12.483066  106098 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 09:32:12.484493  106098 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 09:32:12.488703  106098 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:32:12.490165  106098 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:32:12.492199  106098 config.go:182] Loaded profile config "functional-157940": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:32:12.492991  106098 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:32:12.523571  106098 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:32:12.523700  106098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:32:12.603741  106098 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-23 09:32:12.590551852 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:32:12.603898  106098 docker.go:319] overlay module found
	I1123 09:32:12.606486  106098 out.go:179] * Using the docker driver based on existing profile
	I1123 09:32:12.607816  106098 start.go:309] selected driver: docker
	I1123 09:32:12.607834  106098 start.go:927] validating driver "docker" against &{Name:functional-157940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-157940 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:32:12.607941  106098 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:32:12.610555  106098 out.go:203] 
	W1123 09:32:12.612313  106098 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1123 09:32:12.613554  106098 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-157940 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.53s)
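Note: the RSRC_INSUFFICIENT_REQ_MEMORY exit above is the expected outcome of this test. --dry-run still runs driver and resource validation, and a 250MiB request is rejected against minikube's 1800MB minimum before anything is created. A minimal reproduction outside the harness (profile name reused from the log; sizes illustrative):

	# Dry-run start with an intentionally too-small memory request;
	# validation fails (exit status 23 in this report) before any container is created.
	out/minikube-linux-amd64 start -p functional-157940 --dry-run \
	  --memory 250MB --driver=docker --container-runtime=crio
	echo "exit status: $?"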

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-157940 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-157940 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (218.703419ms)

                                                
                                                
-- stdout --
	* [functional-157940] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:32:13.004634  106306 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:32:13.004716  106306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:32:13.004720  106306 out.go:374] Setting ErrFile to fd 2...
	I1123 09:32:13.004724  106306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:32:13.005212  106306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:32:13.005782  106306 out.go:368] Setting JSON to false
	I1123 09:32:13.007100  106306 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8074,"bootTime":1763882259,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:32:13.007184  106306 start.go:143] virtualization: kvm guest
	I1123 09:32:13.010850  106306 out.go:179] * [functional-157940] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1123 09:32:13.012865  106306 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 09:32:13.012852  106306 notify.go:221] Checking for updates...
	I1123 09:32:13.014270  106306 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:32:13.016575  106306 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 09:32:13.018321  106306 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 09:32:13.019789  106306 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:32:13.021172  106306 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:32:13.023206  106306 config.go:182] Loaded profile config "functional-157940": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:32:13.024062  106306 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:32:13.053879  106306 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:32:13.054019  106306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:32:13.122835  106306 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-23 09:32:13.110563043 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:32:13.122995  106306 docker.go:319] overlay module found
	I1123 09:32:13.125788  106306 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1123 09:32:13.127080  106306 start.go:309] selected driver: docker
	I1123 09:32:13.127123  106306 start.go:927] validating driver "docker" against &{Name:functional-157940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-157940 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:32:13.127240  106306 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:32:13.129240  106306 out.go:203] 
	W1123 09:32:13.130452  106306 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1123 09:32:13.131685  106306 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
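Note: the French output above ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ...") is the same memory-validation failure seen in the DryRun test, rendered through minikube's translation catalogue; the test passes because the localized message appears. A rough manual reproduction, assuming the language is selected via the standard locale environment variables (LC_ALL/LANG), which is how this test appears to drive it:

	# Force French output; the dry-run still fails on the 250MB request,
	# but the error text comes from the fr translation catalogue.
	LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-157940 \
	  --dry-run --memory 250MB --driver=docker --container-runtime=crio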

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.95s)
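Note: the three invocations above cover the default text output, a custom Go-template format (-f takes a template over the status struct fields shown in the log), and machine-readable JSON. A usage sketch with the profile from this run (the template labels are arbitrary):

	# Plain, templated, and JSON status for the same profile.
	out/minikube-linux-amd64 -p functional-157940 status
	out/minikube-linux-amd64 -p functional-157940 status \
	  -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	out/minikube-linux-amd64 -p functional-157940 status -o json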

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (54.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [eaf48e96-3258-41e9-a605-ab287f0b1143] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003257355s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-157940 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-157940 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-157940 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-157940 apply -f testdata/storage-provisioner/pod.yaml
I1123 09:31:19.052877   67870 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [61caec24-738e-4f37-976b-72486c3d99ce] Pending
helpers_test.go:352: "sp-pod" [61caec24-738e-4f37-976b-72486c3d99ce] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [61caec24-738e-4f37-976b-72486c3d99ce] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 40.00378544s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-157940 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-157940 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-157940 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [f9de1c7b-480b-4006-a274-e488521549a4] Pending
helpers_test.go:352: "sp-pod" [f9de1c7b-480b-4006-a274-e488521549a4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [f9de1c7b-480b-4006-a274-e488521549a4] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004310929s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-157940 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (54.07s)
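Note: the flow above applies testdata/storage-provisioner/pvc.yaml and pod.yaml (not reproduced in this report), writes a file through the mounted volume, recreates the pod, and checks that the file survived. A minimal sketch of the same flow with an illustrative claim (the size and pod details are assumptions, not the actual testdata):

	# Illustrative PVC; the default storage-provisioner class binds it dynamically.
	kubectl --context functional-157940 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	EOF
	# A pod mounting the claim is then applied the same way, a file is written
	# via 'kubectl exec sp-pod -- touch /tmp/mount/foo', the pod is deleted and
	# re-created, and persistence is checked with 'ls /tmp/mount'.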

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh -n functional-157940 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 cp functional-157940:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2643623177/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh -n functional-157940 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh -n functional-157940 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.81s)
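Note: the three steps above verify that minikube cp copies host-to-node, node-to-host, and into a node directory that does not yet exist. Condensed (paths illustrative):

	# Host -> node, node -> host, and copy into a not-yet-existing node directory.
	out/minikube-linux-amd64 -p functional-157940 cp ./cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-157940 cp functional-157940:/home/docker/cp-test.txt ./cp-test-copy.txt
	out/minikube-linux-amd64 -p functional-157940 cp ./cp-test.txt /tmp/does/not/exist/cp-test.txt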

                                                
                                    
x
+
TestFunctional/parallel/MySQL (18.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-157940 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-xb9rw" [05bb349a-547b-46a2-bcec-7c06c31bf0e4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-xb9rw" [05bb349a-547b-46a2-bcec-7c06c31bf0e4] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 13.058264854s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-157940 exec mysql-5bb876957f-xb9rw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-157940 exec mysql-5bb876957f-xb9rw -- mysql -ppassword -e "show databases;": exit status 1 (105.614035ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1123 09:32:22.253937   67870 retry.go:31] will retry after 764.892586ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-157940 exec mysql-5bb876957f-xb9rw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-157940 exec mysql-5bb876957f-xb9rw -- mysql -ppassword -e "show databases;": exit status 1 (84.686787ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1123 09:32:23.104064   67870 retry.go:31] will retry after 1.265114079s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-157940 exec mysql-5bb876957f-xb9rw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-157940 exec mysql-5bb876957f-xb9rw -- mysql -ppassword -e "show databases;": exit status 1 (117.053663ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1123 09:32:24.486853   67870 retry.go:31] will retry after 2.661949034s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-157940 exec mysql-5bb876957f-xb9rw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (18.32s)
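Note: the ERROR 1045 / ERROR 2002 failures above are the normal startup window of the mysql container (auth tables and the socket are not ready yet); the harness retries with backoff until "show databases;" succeeds. Outside the harness the same wait can be scripted, e.g. (pod name taken from the log; timeout illustrative):

	# Retry the query until MySQL accepts connections, up to ~60s.
	for i in $(seq 1 30); do
	  if kubectl --context functional-157940 exec mysql-5bb876957f-xb9rw -- \
	       mysql -ppassword -e 'show databases;' 2>/dev/null; then
	    break
	  fi
	  sleep 2
	done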

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/67870/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh "sudo cat /etc/test/nested/copy/67870/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/67870.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh "sudo cat /etc/ssl/certs/67870.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/67870.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh "sudo cat /usr/share/ca-certificates/67870.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/678702.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh "sudo cat /etc/ssl/certs/678702.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/678702.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh "sudo cat /usr/share/ca-certificates/678702.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.84s)
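Note: the hashed filenames checked above (/etc/ssl/certs/51391683.0, /etc/ssl/certs/3ec20f2e.0) follow the OpenSSL convention of naming trusted certificates by their subject hash with a .0 suffix; the sync places the same PEM under both the ID-named path and the hash-named path. The hash for a given certificate can be derived with:

	# Compute the subject hash OpenSSL uses for the ".0" file name
	# (51391683 for the first cert in this run).
	openssl x509 -noout -hash -in /etc/ssl/certs/67870.pem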

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-157940 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-157940 ssh "sudo systemctl is-active docker": exit status 1 (377.635602ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-157940 ssh "sudo systemctl is-active containerd": exit status 1 (397.360403ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)
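Note: the "exit status 1 ... Process exited with status 3" pairs above are expected: systemctl is-active prints "inactive" and exits non-zero (3) for a stopped unit, minikube ssh surfaces that failure, and the test only asserts that docker and containerd are inactive alongside the crio runtime. Quick check by hand:

	# Both runtimes should report "inactive" on a crio-based node.
	out/minikube-linux-amd64 -p functional-157940 ssh "sudo systemctl is-active docker"     || true
	out/minikube-linux-amd64 -p functional-157940 ssh "sudo systemctl is-active containerd" || true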

                                                
                                    
x
+
TestFunctional/parallel/License (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-157940 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-157940 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-157940 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 100557: os: process already finished
helpers_test.go:519: unable to terminate pid 100229: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-157940 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-157940 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (44.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-157940 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [fbc85f5a-d493-4c14-ba58-386b9251e395] Pending
helpers_test.go:352: "nginx-svc" [fbc85f5a-d493-4c14-ba58-386b9251e395] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [fbc85f5a-d493-4c14-ba58-386b9251e395] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 44.002803408s
I1123 09:31:57.177322   67870 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (44.21s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-157940 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.200.241 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
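Note: the tunnel sequence above starts minikube tunnel, applies testdata/testsvc.yaml (a LoadBalancer nginx service), waits for the pod, reads the external IP the tunnel assigns to the service, and reaches it directly (http://10.108.200.241 here). Condensed, the same flow looks roughly like:

	# Run the tunnel in the background so LoadBalancer services get an IP.
	out/minikube-linux-amd64 -p functional-157940 tunnel &
	TUNNEL_PID=$!
	kubectl --context functional-157940 get svc nginx-svc \
	  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	# curl the printed IP, then stop the tunnel:
	kill "$TUNNEL_PID"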

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-157940 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "334.464733ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "66.327819ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "359.022904ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "67.096426ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
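Note: the timings above compare the full profile listing (which inspects each cluster, ~360ms here) with the --light variant (which only reads the on-disk config, ~67ms). The JSON form is convenient for scripting, e.g. pulling profile names with jq (jq assumed installed; the valid/invalid grouping in the output is assumed):

	out/minikube-linux-amd64 profile list -o json --light | jq -r '.valid[].Name'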

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-157940 /tmp/TestFunctionalparallelMountCmdany-port351470635/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763890319521106685" to /tmp/TestFunctionalparallelMountCmdany-port351470635/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763890319521106685" to /tmp/TestFunctionalparallelMountCmdany-port351470635/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763890319521106685" to /tmp/TestFunctionalparallelMountCmdany-port351470635/001/test-1763890319521106685
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh "findmnt -T /mount-9p | grep 9p"
I1123 09:31:59.676647   67870 detect.go:223] nested VM detected
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-157940 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (298.892562ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1123 09:31:59.820353   67870 retry.go:31] will retry after 741.034068ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 23 09:31 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 23 09:31 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 23 09:31 test-1763890319521106685
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh cat /mount-9p/test-1763890319521106685
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-157940 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [86a09269-ad6c-4f5e-911c-b8a5f313164a] Pending
helpers_test.go:352: "busybox-mount" [86a09269-ad6c-4f5e-911c-b8a5f313164a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [86a09269-ad6c-4f5e-911c-b8a5f313164a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [86a09269-ad6c-4f5e-911c-b8a5f313164a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003166271s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-157940 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-157940 /tmp/TestFunctionalparallelMountCmdany-port351470635/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.07s)
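Note: this test publishes a host directory into the node over 9p on an arbitrary port, verifies it with findmnt (the first findmnt attempt races the mount daemon starting up, hence the retried exit status 1), runs a busybox pod against it, and unmounts. A hand-run equivalent with an illustrative host directory:

	# Mount a host dir at /mount-9p inside the node, in the background.
	out/minikube-linux-amd64 mount -p functional-157940 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
	MOUNT_PID=$!
	out/minikube-linux-amd64 -p functional-157940 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-157940 ssh "sudo umount -f /mount-9p"
	kill "$MOUNT_PID"
	# A fixed port can be requested with --port 46464, as in the
	# specific-port variant below.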

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-157940 /tmp/TestFunctionalparallelMountCmdspecific-port900455477/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-157940 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (313.978401ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1123 09:32:07.904125   67870 retry.go:31] will retry after 528.95011ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-157940 /tmp/TestFunctionalparallelMountCmdspecific-port900455477/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-157940 ssh "sudo umount -f /mount-9p": exit status 1 (281.682324ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-157940 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-157940 /tmp/TestFunctionalparallelMountCmdspecific-port900455477/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 update-context --alsologtostderr -v=2
2025/11/23 09:32:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-157940 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4282425354/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-157940 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4282425354/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-157940 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4282425354/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-157940 ssh "findmnt -T" /mount1: exit status 1 (349.453721ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1123 09:32:09.888033   67870 retry.go:31] will retry after 263.655967ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-157940 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-157940 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4282425354/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-157940 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4282425354/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-157940 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4282425354/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.52s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-157940 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-157940 image ls --format short --alsologtostderr:
I1123 09:32:27.912311  108283 out.go:360] Setting OutFile to fd 1 ...
I1123 09:32:27.912652  108283 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:32:27.912664  108283 out.go:374] Setting ErrFile to fd 2...
I1123 09:32:27.912669  108283 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:32:27.912937  108283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
I1123 09:32:27.913831  108283 config.go:182] Loaded profile config "functional-157940": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:32:27.913998  108283 config.go:182] Loaded profile config "functional-157940": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:32:27.914669  108283 cli_runner.go:164] Run: docker container inspect functional-157940 --format={{.State.Status}}
I1123 09:32:27.934604  108283 ssh_runner.go:195] Run: systemctl --version
I1123 09:32:27.934651  108283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-157940
I1123 09:32:27.952841  108283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/functional-157940/id_rsa Username:docker}
I1123 09:32:28.052355  108283 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-157940 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-157940 image ls --format table --alsologtostderr:
I1123 09:32:28.636936  108757 out.go:360] Setting OutFile to fd 1 ...
I1123 09:32:28.637203  108757 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:32:28.637212  108757 out.go:374] Setting ErrFile to fd 2...
I1123 09:32:28.637216  108757 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:32:28.637432  108757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
I1123 09:32:28.638058  108757 config.go:182] Loaded profile config "functional-157940": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:32:28.638197  108757 config.go:182] Loaded profile config "functional-157940": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:32:28.638791  108757 cli_runner.go:164] Run: docker container inspect functional-157940 --format={{.State.Status}}
I1123 09:32:28.656762  108757 ssh_runner.go:195] Run: systemctl --version
I1123 09:32:28.656813  108757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-157940
I1123 09:32:28.675025  108757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/functional-157940/id_rsa Username:docker}
I1123 09:32:28.775389  108757 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-157940 image ls --format json --alsologtostderr:
[{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf
92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kuberne
tesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:711703309
36954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kind
est/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256
:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTa
gs":[],"size":"249229937"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-157940 image ls --format json --alsologtostderr:
I1123 09:32:28.393436  108609 out.go:360] Setting OutFile to fd 1 ...
I1123 09:32:28.393549  108609 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:32:28.393561  108609 out.go:374] Setting ErrFile to fd 2...
I1123 09:32:28.393581  108609 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:32:28.393769  108609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
I1123 09:32:28.394384  108609 config.go:182] Loaded profile config "functional-157940": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:32:28.394490  108609 config.go:182] Loaded profile config "functional-157940": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:32:28.394905  108609 cli_runner.go:164] Run: docker container inspect functional-157940 --format={{.State.Status}}
I1123 09:32:28.415812  108609 ssh_runner.go:195] Run: systemctl --version
I1123 09:32:28.415881  108609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-157940
I1123 09:32:28.436220  108609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/functional-157940/id_rsa Username:docker}
I1123 09:32:28.537228  108609 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
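Editor's note: the JSON listing captured above is an array of objects with id, repoDigests, repoTags and size fields (size is a byte count encoded as a string). A minimal sketch of consuming that output outside the test suite, assuming the same binary path and profile name as the command under test:

```go
// listimages.go — illustrative only, not part of the test suite. Field names
// are taken from the stdout captured above; the binary path and profile name
// mirror the command the test runs and must exist for this to run.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // byte count, reported as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-157940",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-15.15s %10s bytes  tags=%v\n", img.ID, img.Size, img.RepoTags)
	}
}
```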

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-157940 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-157940 image ls --format yaml --alsologtostderr:
I1123 09:32:28.151625  108450 out.go:360] Setting OutFile to fd 1 ...
I1123 09:32:28.151872  108450 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:32:28.151883  108450 out.go:374] Setting ErrFile to fd 2...
I1123 09:32:28.151889  108450 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:32:28.152108  108450 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
I1123 09:32:28.152630  108450 config.go:182] Loaded profile config "functional-157940": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:32:28.152722  108450 config.go:182] Loaded profile config "functional-157940": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:32:28.153179  108450 cli_runner.go:164] Run: docker container inspect functional-157940 --format={{.State.Status}}
I1123 09:32:28.171496  108450 ssh_runner.go:195] Run: systemctl --version
I1123 09:32:28.171552  108450 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-157940
I1123 09:32:28.191377  108450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/functional-157940/id_rsa Username:docker}
I1123 09:32:28.294812  108450 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-157940 ssh pgrep buildkitd: exit status 1 (279.586416ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 image build -t localhost/my-image:functional-157940 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-157940 image build -t localhost/my-image:functional-157940 testdata/build --alsologtostderr: (3.32695565s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-157940 image build -t localhost/my-image:functional-157940 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 337aeda7d3b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-157940
--> 73c99757071
Successfully tagged localhost/my-image:functional-157940
73c99757071d3c01f21e7820a52dd01f04338e9d32799b4eebd31cfdfcb171da
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-157940 image build -t localhost/my-image:functional-157940 testdata/build --alsologtostderr:
I1123 09:32:28.654527  108763 out.go:360] Setting OutFile to fd 1 ...
I1123 09:32:28.654821  108763 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:32:28.654832  108763 out.go:374] Setting ErrFile to fd 2...
I1123 09:32:28.654836  108763 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:32:28.655104  108763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
I1123 09:32:28.655690  108763 config.go:182] Loaded profile config "functional-157940": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:32:28.656448  108763 config.go:182] Loaded profile config "functional-157940": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:32:28.656945  108763 cli_runner.go:164] Run: docker container inspect functional-157940 --format={{.State.Status}}
I1123 09:32:28.675527  108763 ssh_runner.go:195] Run: systemctl --version
I1123 09:32:28.675586  108763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-157940
I1123 09:32:28.692555  108763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/functional-157940/id_rsa Username:docker}
I1123 09:32:28.791527  108763 build_images.go:162] Building image from path: /tmp/build.4225578096.tar
I1123 09:32:28.791600  108763 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1123 09:32:28.800108  108763 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4225578096.tar
I1123 09:32:28.803978  108763 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4225578096.tar: stat -c "%s %y" /var/lib/minikube/build/build.4225578096.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4225578096.tar': No such file or directory
I1123 09:32:28.804005  108763 ssh_runner.go:362] scp /tmp/build.4225578096.tar --> /var/lib/minikube/build/build.4225578096.tar (3072 bytes)
I1123 09:32:28.822875  108763 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4225578096
I1123 09:32:28.830468  108763 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4225578096 -xf /var/lib/minikube/build/build.4225578096.tar
I1123 09:32:28.838212  108763 crio.go:315] Building image: /var/lib/minikube/build/build.4225578096
I1123 09:32:28.838270  108763 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-157940 /var/lib/minikube/build/build.4225578096 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1123 09:32:31.886301  108763 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-157940 /var/lib/minikube/build/build.4225578096 --cgroup-manager=cgroupfs: (3.047998482s)
I1123 09:32:31.886386  108763 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4225578096
I1123 09:32:31.894370  108763 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4225578096.tar
I1123 09:32:31.901742  108763 build_images.go:218] Built localhost/my-image:functional-157940 from /tmp/build.4225578096.tar
I1123 09:32:31.901772  108763 build_images.go:134] succeeded building to: functional-157940
I1123 09:32:31.901785  108763 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 image ls
E1123 09:32:37.870262   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:34:54.009662   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:35:21.712462   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:39:54.009620   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.83s)
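Editor's note: the stderr above shows how `image build` works on the crio runtime: the build context is shipped to the node as a tar under /var/lib/minikube/build, unpacked, built with podman, and the scratch files removed. A rough sketch of those node-side steps, assuming root access and podman on the node; the tar name and image tag are copied from this run and error handling is reduced to panics:

```go
// buildfromtar.go — illustrative sketch of the node-side flow seen in the log,
// not the minikube implementation itself.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its combined output, and aborts on failure.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	tar := "/var/lib/minikube/build/build.4225578096.tar" // shipped build context
	dir := "/var/lib/minikube/build/build.4225578096"

	run("sudo", "mkdir", "-p", dir)
	run("sudo", "tar", "-C", dir, "-xf", tar)
	run("sudo", "podman", "build",
		"-t", "localhost/my-image:functional-157940",
		dir, "--cgroup-manager=cgroupfs")
	run("sudo", "rm", "-rf", dir)
	run("sudo", "rm", "-f", tar)
}
```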

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.98s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.957070958s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-157940
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.98s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 image rm kicbase/echo-server:functional-157940 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.71s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-157940 service list: (1.705697088s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.71s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.72s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-157940 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-157940 service list -o json: (1.722452679s)
functional_test.go:1504: Took "1.722557607s" to run "out/minikube-linux-amd64 -p functional-157940 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.72s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-157940
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-157940
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-157940
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (145.26s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-876620 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m24.545811202s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (145.26s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.65s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-876620 kubectl -- rollout status deployment/busybox: (3.741519995s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 kubectl -- exec busybox-7b57f96db7-5k7dd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 kubectl -- exec busybox-7b57f96db7-bvz5j -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 kubectl -- exec busybox-7b57f96db7-clxq8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 kubectl -- exec busybox-7b57f96db7-5k7dd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 kubectl -- exec busybox-7b57f96db7-bvz5j -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 kubectl -- exec busybox-7b57f96db7-clxq8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 kubectl -- exec busybox-7b57f96db7-5k7dd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 kubectl -- exec busybox-7b57f96db7-bvz5j -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 kubectl -- exec busybox-7b57f96db7-clxq8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.65s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.02s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 kubectl -- exec busybox-7b57f96db7-5k7dd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 kubectl -- exec busybox-7b57f96db7-5k7dd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 kubectl -- exec busybox-7b57f96db7-bvz5j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 kubectl -- exec busybox-7b57f96db7-bvz5j -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 kubectl -- exec busybox-7b57f96db7-clxq8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 kubectl -- exec busybox-7b57f96db7-clxq8 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.02s)
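Editor's note: for each busybox pod the test resolves host.minikube.internal inside the pod (the awk/cut pipeline extracts the answer line from nslookup) and then pings that address once. A minimal sketch of the same two steps for a single pod, using the pod name and pipeline copied from the log; the binary path and a running ha-876620 profile are assumed:

```go
// pinghost.go — illustrative sketch of ha_test.go:207/218 for one pod.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-7b57f96db7-5k7dd" // example pod name from this run
	base := []string{"-p", "ha-876620", "kubectl", "--", "exec", pod, "--", "sh", "-c"}

	// Step 1: resolve the host gateway name inside the pod.
	lookup := append(append([]string{}, base...),
		"nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	out, err := exec.Command("out/minikube-linux-amd64", lookup...).Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out)) // 192.168.49.1 on the docker driver in this run

	// Step 2: ping the resolved address once from inside the pod.
	ping := append(append([]string{}, base...), fmt.Sprintf("ping -c 1 %s", hostIP))
	if err := exec.Command("out/minikube-linux-amd64", ping...).Run(); err != nil {
		panic(err)
	}
	fmt.Println("host", hostIP, "reachable from", pod)
}
```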

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (27.51s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-876620 node add --alsologtostderr -v 5: (26.645316984s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.51s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-876620 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (17.14s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 cp testdata/cp-test.txt ha-876620:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 cp ha-876620:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1573671500/001/cp-test_ha-876620.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 cp ha-876620:/home/docker/cp-test.txt ha-876620-m02:/home/docker/cp-test_ha-876620_ha-876620-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m02 "sudo cat /home/docker/cp-test_ha-876620_ha-876620-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 cp ha-876620:/home/docker/cp-test.txt ha-876620-m03:/home/docker/cp-test_ha-876620_ha-876620-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m03 "sudo cat /home/docker/cp-test_ha-876620_ha-876620-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 cp ha-876620:/home/docker/cp-test.txt ha-876620-m04:/home/docker/cp-test_ha-876620_ha-876620-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m04 "sudo cat /home/docker/cp-test_ha-876620_ha-876620-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 cp testdata/cp-test.txt ha-876620-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 cp ha-876620-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1573671500/001/cp-test_ha-876620-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 cp ha-876620-m02:/home/docker/cp-test.txt ha-876620:/home/docker/cp-test_ha-876620-m02_ha-876620.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620 "sudo cat /home/docker/cp-test_ha-876620-m02_ha-876620.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 cp ha-876620-m02:/home/docker/cp-test.txt ha-876620-m03:/home/docker/cp-test_ha-876620-m02_ha-876620-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m03 "sudo cat /home/docker/cp-test_ha-876620-m02_ha-876620-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 cp ha-876620-m02:/home/docker/cp-test.txt ha-876620-m04:/home/docker/cp-test_ha-876620-m02_ha-876620-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m04 "sudo cat /home/docker/cp-test_ha-876620-m02_ha-876620-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 cp testdata/cp-test.txt ha-876620-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 cp ha-876620-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1573671500/001/cp-test_ha-876620-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 cp ha-876620-m03:/home/docker/cp-test.txt ha-876620:/home/docker/cp-test_ha-876620-m03_ha-876620.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620 "sudo cat /home/docker/cp-test_ha-876620-m03_ha-876620.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 cp ha-876620-m03:/home/docker/cp-test.txt ha-876620-m02:/home/docker/cp-test_ha-876620-m03_ha-876620-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m02 "sudo cat /home/docker/cp-test_ha-876620-m03_ha-876620-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 cp ha-876620-m03:/home/docker/cp-test.txt ha-876620-m04:/home/docker/cp-test_ha-876620-m03_ha-876620-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m04 "sudo cat /home/docker/cp-test_ha-876620-m03_ha-876620-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 cp testdata/cp-test.txt ha-876620-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 cp ha-876620-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1573671500/001/cp-test_ha-876620-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 cp ha-876620-m04:/home/docker/cp-test.txt ha-876620:/home/docker/cp-test_ha-876620-m04_ha-876620.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620 "sudo cat /home/docker/cp-test_ha-876620-m04_ha-876620.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 cp ha-876620-m04:/home/docker/cp-test.txt ha-876620-m02:/home/docker/cp-test_ha-876620-m04_ha-876620-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m02 "sudo cat /home/docker/cp-test_ha-876620-m04_ha-876620-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 cp ha-876620-m04:/home/docker/cp-test.txt ha-876620-m03:/home/docker/cp-test_ha-876620-m04_ha-876620-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 ssh -n ha-876620-m03 "sudo cat /home/docker/cp-test_ha-876620-m04_ha-876620-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.14s)
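Editor's note: the copy test repeats one pattern for every node pair: `minikube cp` pushes testdata/cp-test.txt, then `minikube ssh -n <node> "sudo cat ..."` reads it back for comparison. A condensed sketch of that push-and-verify loop across the four ha-876620 nodes; the node names and paths come from the log, and comparing against the local file is an illustrative addition:

```go
// copycheck.go — illustrative sketch of the cp/ssh verification pattern above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	nodes := []string{"ha-876620", "ha-876620-m02", "ha-876620-m03", "ha-876620-m04"}
	for _, node := range nodes {
		// Push the file to the node...
		if err := exec.Command("out/minikube-linux-amd64", "-p", "ha-876620", "cp",
			"testdata/cp-test.txt", node+":/home/docker/cp-test.txt").Run(); err != nil {
			panic(err)
		}
		// ...and read it back over ssh to confirm the contents survived the copy.
		remote, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-876620", "ssh",
			"-n", node, "sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: match=%v\n", node, string(remote) == string(local))
	}
}
```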

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.78s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-876620 node stop m02 --alsologtostderr -v 5: (13.079840605s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 status --alsologtostderr -v 5
E1123 09:44:54.009822   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-876620 status --alsologtostderr -v 5: exit status 7 (702.452168ms)

                                                
                                                
-- stdout --
	ha-876620
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-876620-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-876620-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-876620-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:44:53.891026  133008 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:44:53.891477  133008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:44:53.891487  133008 out.go:374] Setting ErrFile to fd 2...
	I1123 09:44:53.891492  133008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:44:53.891722  133008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:44:53.891919  133008 out.go:368] Setting JSON to false
	I1123 09:44:53.891953  133008 mustload.go:66] Loading cluster: ha-876620
	I1123 09:44:53.892023  133008 notify.go:221] Checking for updates...
	I1123 09:44:53.892481  133008 config.go:182] Loaded profile config "ha-876620": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:44:53.892510  133008 status.go:174] checking status of ha-876620 ...
	I1123 09:44:53.893156  133008 cli_runner.go:164] Run: docker container inspect ha-876620 --format={{.State.Status}}
	I1123 09:44:53.915473  133008 status.go:371] ha-876620 host status = "Running" (err=<nil>)
	I1123 09:44:53.915499  133008 host.go:66] Checking if "ha-876620" exists ...
	I1123 09:44:53.915797  133008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-876620
	I1123 09:44:53.932749  133008 host.go:66] Checking if "ha-876620" exists ...
	I1123 09:44:53.932969  133008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:44:53.933011  133008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876620
	I1123 09:44:53.950455  133008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/ha-876620/id_rsa Username:docker}
	I1123 09:44:54.048702  133008 ssh_runner.go:195] Run: systemctl --version
	I1123 09:44:54.055029  133008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:44:54.068117  133008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:44:54.131040  133008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 09:44:54.121540438 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:44:54.131691  133008 kubeconfig.go:125] found "ha-876620" server: "https://192.168.49.254:8443"
	I1123 09:44:54.131723  133008 api_server.go:166] Checking apiserver status ...
	I1123 09:44:54.131766  133008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:44:54.143670  133008 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1244/cgroup
	W1123 09:44:54.151968  133008 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1244/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:44:54.152034  133008 ssh_runner.go:195] Run: ls
	I1123 09:44:54.155691  133008 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 09:44:54.159878  133008 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 09:44:54.159900  133008 status.go:463] ha-876620 apiserver status = Running (err=<nil>)
	I1123 09:44:54.159913  133008 status.go:176] ha-876620 status: &{Name:ha-876620 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:44:54.159933  133008 status.go:174] checking status of ha-876620-m02 ...
	I1123 09:44:54.160199  133008 cli_runner.go:164] Run: docker container inspect ha-876620-m02 --format={{.State.Status}}
	I1123 09:44:54.177099  133008 status.go:371] ha-876620-m02 host status = "Stopped" (err=<nil>)
	I1123 09:44:54.177123  133008 status.go:384] host is not running, skipping remaining checks
	I1123 09:44:54.177130  133008 status.go:176] ha-876620-m02 status: &{Name:ha-876620-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:44:54.177148  133008 status.go:174] checking status of ha-876620-m03 ...
	I1123 09:44:54.177379  133008 cli_runner.go:164] Run: docker container inspect ha-876620-m03 --format={{.State.Status}}
	I1123 09:44:54.193902  133008 status.go:371] ha-876620-m03 host status = "Running" (err=<nil>)
	I1123 09:44:54.193923  133008 host.go:66] Checking if "ha-876620-m03" exists ...
	I1123 09:44:54.194188  133008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-876620-m03
	I1123 09:44:54.211127  133008 host.go:66] Checking if "ha-876620-m03" exists ...
	I1123 09:44:54.211351  133008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:44:54.211391  133008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876620-m03
	I1123 09:44:54.228244  133008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/ha-876620-m03/id_rsa Username:docker}
	I1123 09:44:54.327596  133008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:44:54.340251  133008 kubeconfig.go:125] found "ha-876620" server: "https://192.168.49.254:8443"
	I1123 09:44:54.340280  133008 api_server.go:166] Checking apiserver status ...
	I1123 09:44:54.340320  133008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:44:54.351147  133008 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1175/cgroup
	W1123 09:44:54.360316  133008 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1175/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:44:54.360369  133008 ssh_runner.go:195] Run: ls
	I1123 09:44:54.364241  133008 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 09:44:54.368379  133008 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 09:44:54.368409  133008 status.go:463] ha-876620-m03 apiserver status = Running (err=<nil>)
	I1123 09:44:54.368422  133008 status.go:176] ha-876620-m03 status: &{Name:ha-876620-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:44:54.368452  133008 status.go:174] checking status of ha-876620-m04 ...
	I1123 09:44:54.368774  133008 cli_runner.go:164] Run: docker container inspect ha-876620-m04 --format={{.State.Status}}
	I1123 09:44:54.386702  133008 status.go:371] ha-876620-m04 host status = "Running" (err=<nil>)
	I1123 09:44:54.386725  133008 host.go:66] Checking if "ha-876620-m04" exists ...
	I1123 09:44:54.386972  133008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-876620-m04
	I1123 09:44:54.404298  133008 host.go:66] Checking if "ha-876620-m04" exists ...
	I1123 09:44:54.404564  133008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:44:54.404602  133008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876620-m04
	I1123 09:44:54.422728  133008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/ha-876620-m04/id_rsa Username:docker}
	I1123 09:44:54.520388  133008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:44:54.532700  133008 status.go:176] ha-876620-m04 status: &{Name:ha-876620-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.78s)
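Editor's note: the stderr above shows how `minikube status` decides that the remaining control planes are healthy after m02 is stopped: it probes /healthz on the HA virtual IP (https://192.168.49.254:8443) and treats an HTTP 200 "ok" as a running apiserver (api_server.go:253/279). A minimal sketch of that probe; TLS verification is skipped here for brevity, whereas the real client trusts the cluster CA:

```go
// healthz.go — illustrative sketch of the apiserver health probe seen in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Sketch only: skip certificate verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expected in this run: 200 ok
}
```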

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (8.83s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-876620 node start m02 --alsologtostderr -v 5: (7.826280449s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.83s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.94s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (107.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-876620 stop --alsologtostderr -v 5: (51.907449028s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 start --wait true --alsologtostderr -v 5
E1123 09:46:12.238316   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:46:12.244758   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:46:12.256144   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:46:12.277605   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:46:12.319045   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:46:12.400594   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:46:12.561952   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:46:12.883725   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:46:13.525210   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:46:14.807382   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:46:17.074683   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:46:17.369151   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:46:22.490966   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:46:32.733018   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-876620 start --wait true --alsologtostderr -v 5: (55.499531396s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (107.54s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 node delete m03 --alsologtostderr -v 5
E1123 09:46:53.214454   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-876620 node delete m03 --alsologtostderr -v 5: (9.757385904s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.65s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (42.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 stop --alsologtostderr -v 5
E1123 09:47:34.176521   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-876620 stop --alsologtostderr -v 5: (42.827455377s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-876620 status --alsologtostderr -v 5: exit status 7 (117.885533ms)

                                                
                                                
-- stdout --
	ha-876620
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-876620-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-876620-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:47:46.857805  147124 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:47:46.857945  147124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:47:46.857955  147124 out.go:374] Setting ErrFile to fd 2...
	I1123 09:47:46.857959  147124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:47:46.858162  147124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:47:46.858336  147124 out.go:368] Setting JSON to false
	I1123 09:47:46.858364  147124 mustload.go:66] Loading cluster: ha-876620
	I1123 09:47:46.858419  147124 notify.go:221] Checking for updates...
	I1123 09:47:46.858679  147124 config.go:182] Loaded profile config "ha-876620": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:47:46.858696  147124 status.go:174] checking status of ha-876620 ...
	I1123 09:47:46.859248  147124 cli_runner.go:164] Run: docker container inspect ha-876620 --format={{.State.Status}}
	I1123 09:47:46.877537  147124 status.go:371] ha-876620 host status = "Stopped" (err=<nil>)
	I1123 09:47:46.877557  147124 status.go:384] host is not running, skipping remaining checks
	I1123 09:47:46.877563  147124 status.go:176] ha-876620 status: &{Name:ha-876620 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:47:46.877602  147124 status.go:174] checking status of ha-876620-m02 ...
	I1123 09:47:46.877831  147124 cli_runner.go:164] Run: docker container inspect ha-876620-m02 --format={{.State.Status}}
	I1123 09:47:46.895926  147124 status.go:371] ha-876620-m02 host status = "Stopped" (err=<nil>)
	I1123 09:47:46.895947  147124 status.go:384] host is not running, skipping remaining checks
	I1123 09:47:46.895955  147124 status.go:176] ha-876620-m02 status: &{Name:ha-876620-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:47:46.895978  147124 status.go:174] checking status of ha-876620-m04 ...
	I1123 09:47:46.896297  147124 cli_runner.go:164] Run: docker container inspect ha-876620-m04 --format={{.State.Status}}
	I1123 09:47:46.914016  147124 status.go:371] ha-876620-m04 host status = "Stopped" (err=<nil>)
	I1123 09:47:46.914041  147124 status.go:384] host is not running, skipping remaining checks
	I1123 09:47:46.914050  147124 status.go:176] ha-876620-m04 status: &{Name:ha-876620-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (42.95s)
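
For reference, the stop/status behaviour captured above can be checked from a shell: once every node in the profile is down, the status command exits non-zero (exit status 7 in this run) even though nothing is wrong. A minimal sketch, reusing this run's profile name ha-876620:

    # stop every node in the HA profile, then poll the aggregate status
    out/minikube-linux-amd64 -p ha-876620 stop --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-876620 status || echo "status exit code $? (7 in this run) - all hosts stopped"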

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (58.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-876620 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (57.708904103s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (58.51s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (71.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 node add --control-plane --alsologtostderr -v 5
E1123 09:48:56.098469   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:49:54.010004   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-876620 node add --control-plane --alsologtostderr -v 5: (1m10.420482504s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-876620 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (71.29s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

                                                
                                    
TestJSONOutput/start/Command (69.32s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-066750 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1123 09:51:12.244644   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-066750 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m9.314506823s)
--- PASS: TestJSONOutput/start/Command (69.32s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.99s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-066750 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-066750 --output=json --user=testUser: (7.987927307s)
--- PASS: TestJSONOutput/stop/Command (7.99s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-697585 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-697585 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (76.350396ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6de22c74-66b8-4267-9ef0-813a14bfbda4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-697585] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b4e9570e-065f-42e9-8a11-b3d2f6a0026a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21968"}}
	{"specversion":"1.0","id":"74fe2cc6-3b46-4e81-ad95-0a4e8fe9ee47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"179b28e4-cc60-4f48-bab1-31b9c75bbf20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig"}}
	{"specversion":"1.0","id":"f13722c8-3738-4c2d-a082-b427bc724ba5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube"}}
	{"specversion":"1.0","id":"e49b20dd-e13a-43eb-a08e-9962c4630ab1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9381407a-b584-483a-8941-c4779412dfa6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fd83cab8-3b61-4027-b59c-1bb8bc8b5e1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-697585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-697585
--- PASS: TestErrorJSONOutput (0.23s)
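
The --output=json stream shown above is one CloudEvents-style JSON object per line, with event types such as io.k8s.sigs.minikube.step, io.k8s.sigs.minikube.info and io.k8s.sigs.minikube.error, so it can be filtered with ordinary line-oriented tools. A small sketch, assuming jq is installed; the profile name is a placeholder:

    # surface only error events (exit code and message) from a JSON-mode start
    out/minikube-linux-amd64 start -p demo --output=json --driver=docker --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.exitcode): \(.data.message)"'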

                                                
                                    
TestKicCustomNetwork/create_custom_network (35.09s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-652141 --network=
E1123 09:51:39.944838   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-652141 --network=: (32.957432513s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-652141" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-652141
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-652141: (2.116807911s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.09s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (22.64s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-239525 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-239525 --network=bridge: (20.646549411s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-239525" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-239525
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-239525: (1.971877162s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.64s)

                                                
                                    
TestKicExistingNetwork (25.92s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1123 09:52:30.708364   67870 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1123 09:52:30.725329   67870 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1123 09:52:30.725388   67870 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1123 09:52:30.725405   67870 cli_runner.go:164] Run: docker network inspect existing-network
W1123 09:52:30.740983   67870 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1123 09:52:30.741015   67870 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1123 09:52:30.741036   67870 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1123 09:52:30.741171   67870 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1123 09:52:30.757416   67870 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9af1e2c0d039 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:86:44:24:e5:b5} reservation:<nil>}
I1123 09:52:30.757730   67870 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b512c0}
I1123 09:52:30.757776   67870 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1123 09:52:30.757840   67870 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1123 09:52:30.803651   67870 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-680384 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-680384 --network=existing-network: (23.817711967s)
helpers_test.go:175: Cleaning up "existing-network-680384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-680384
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-680384: (1.974086371s)
I1123 09:52:56.612473   67870 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.92s)
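
The sequence above creates the docker network out of band and then attaches the cluster to it via --network. A condensed sketch of that flow; the network and profile names are placeholders:

    # pre-create a bridge network, attach a cluster to it, then clean up
    docker network create --driver=bridge --subnet=192.168.58.0/24 my-network
    out/minikube-linux-amd64 start -p demo --network=my-network --driver=docker --container-runtime=crio
    docker network ls --format '{{.Name}}'      # the pre-created network is attached rather than a new one being allocated
    out/minikube-linux-amd64 delete -p demo
    docker network rm my-network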

                                                
                                    
TestKicCustomSubnet (23.25s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-048845 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-048845 --subnet=192.168.60.0/24: (21.111888621s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-048845 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-048845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-048845
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-048845: (2.114248762s)
--- PASS: TestKicCustomSubnet (23.25s)

                                                
                                    
TestKicStaticIP (27.42s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-497610 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-497610 --static-ip=192.168.200.200: (25.192283792s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-497610 ip
helpers_test.go:175: Cleaning up "static-ip-497610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-497610
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-497610: (2.079359451s)
--- PASS: TestKicStaticIP (27.42s)
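
--subnet and --static-ip, exercised by the two KIC tests above, pin the cluster container's address at start time; the assigned address can be read back with the ip subcommand. A minimal sketch with a placeholder profile name:

    # start a cluster on a fixed address and read it back
    out/minikube-linux-amd64 start -p demo --static-ip=192.168.200.200 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p demo ip          # expected to print 192.168.200.200
    out/minikube-linux-amd64 delete -p demo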

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (52.55s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-741257 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-741257 --driver=docker  --container-runtime=crio: (23.510617432s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-744109 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-744109 --driver=docker  --container-runtime=crio: (23.1070702s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-741257
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-744109
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-744109" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-744109
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-744109: (2.36914209s)
helpers_test.go:175: Cleaning up "first-741257" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-741257
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-741257: (2.308544812s)
--- PASS: TestMinikubeProfile (52.55s)
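
The profile test above drives two independent clusters from one binary and switches the active profile between them. The same flow by hand, with placeholder profile names:

    # create two profiles, make one active, and inspect the profile list
    out/minikube-linux-amd64 start -p first --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 start -p second --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 profile first           # select "first" as the active profile
    out/minikube-linux-amd64 profile list -ojson     # list all profiles as JSON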

                                                
                                    
TestMountStart/serial/StartWithMountFirst (4.8s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-581984 --memory=3072 --mount-string /tmp/TestMountStartserial3125219708/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-581984 --memory=3072 --mount-string /tmp/TestMountStartserial3125219708/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.797103685s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.80s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-581984 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.75s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-599972 --memory=3072 --mount-string /tmp/TestMountStartserial3125219708/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-599972 --memory=3072 --mount-string /tmp/TestMountStartserial3125219708/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.749766853s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.75s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-599972 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.66s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-581984 --alsologtostderr -v=5
E1123 09:54:54.009320   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-581984 --alsologtostderr -v=5: (1.662388723s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-599972 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-599972
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-599972: (1.257103245s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.88s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-599972
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-599972: (6.877932649s)
--- PASS: TestMountStart/serial/RestartStopped (7.88s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-599972 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)
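
The mount-start sequence above configures the host mount entirely through start-time flags (no separate mount process), and the mount comes back after a stop/start cycle. A compressed sketch mirroring the flags from this run, with a placeholder host path and profile name:

    # start a no-kubernetes profile with a host directory mount
    out/minikube-linux-amd64 start -p mnt-demo --memory=3072 \
      --mount-string /tmp/hostdir:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
      --no-kubernetes --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p mnt-demo ssh -- ls /minikube-host   # host directory is visible in the guest
    out/minikube-linux-amd64 stop -p mnt-demo
    out/minikube-linux-amd64 start -p mnt-demo                      # restart with no flags; the profile keeps its mount settings
    out/minikube-linux-amd64 -p mnt-demo ssh -- ls /minikube-host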

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (63.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-891772 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-891772 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m2.863130324s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (63.36s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-891772 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-891772 -- rollout status deployment/busybox
E1123 09:56:12.238289   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-891772 -- rollout status deployment/busybox: (2.838373189s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-891772 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-891772 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-891772 -- exec busybox-7b57f96db7-7tzsm -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-891772 -- exec busybox-7b57f96db7-z9744 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-891772 -- exec busybox-7b57f96db7-7tzsm -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-891772 -- exec busybox-7b57f96db7-z9744 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-891772 -- exec busybox-7b57f96db7-7tzsm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-891772 -- exec busybox-7b57f96db7-z9744 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.25s)
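
The deployment check above rolls out a two-replica busybox deployment across the nodes and asserts in-cluster DNS resolution from each pod. The same probe can be run by hand; <pod-name> is a placeholder taken from the pod listing:

    # list the busybox pods, then resolve a cluster-internal name from inside one of them
    out/minikube-linux-amd64 kubectl -p multinode-891772 -- get pods -o jsonpath='{.items[*].metadata.name}'
    out/minikube-linux-amd64 kubectl -p multinode-891772 -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local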

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-891772 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-891772 -- exec busybox-7b57f96db7-7tzsm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-891772 -- exec busybox-7b57f96db7-7tzsm -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-891772 -- exec busybox-7b57f96db7-z9744 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-891772 -- exec busybox-7b57f96db7-z9744 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.72s)

                                                
                                    
TestMultiNode/serial/AddNode (24.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-891772 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-891772 -v=5 --alsologtostderr: (23.430818255s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-891772 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 cp testdata/cp-test.txt multinode-891772:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 ssh -n multinode-891772 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 cp multinode-891772:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3218142913/001/cp-test_multinode-891772.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 ssh -n multinode-891772 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 cp multinode-891772:/home/docker/cp-test.txt multinode-891772-m02:/home/docker/cp-test_multinode-891772_multinode-891772-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 ssh -n multinode-891772 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 ssh -n multinode-891772-m02 "sudo cat /home/docker/cp-test_multinode-891772_multinode-891772-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 cp multinode-891772:/home/docker/cp-test.txt multinode-891772-m03:/home/docker/cp-test_multinode-891772_multinode-891772-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 ssh -n multinode-891772 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 ssh -n multinode-891772-m03 "sudo cat /home/docker/cp-test_multinode-891772_multinode-891772-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 cp testdata/cp-test.txt multinode-891772-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 ssh -n multinode-891772-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 cp multinode-891772-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3218142913/001/cp-test_multinode-891772-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 ssh -n multinode-891772-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 cp multinode-891772-m02:/home/docker/cp-test.txt multinode-891772:/home/docker/cp-test_multinode-891772-m02_multinode-891772.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 ssh -n multinode-891772-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 ssh -n multinode-891772 "sudo cat /home/docker/cp-test_multinode-891772-m02_multinode-891772.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 cp multinode-891772-m02:/home/docker/cp-test.txt multinode-891772-m03:/home/docker/cp-test_multinode-891772-m02_multinode-891772-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 ssh -n multinode-891772-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 ssh -n multinode-891772-m03 "sudo cat /home/docker/cp-test_multinode-891772-m02_multinode-891772-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 cp testdata/cp-test.txt multinode-891772-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 ssh -n multinode-891772-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 cp multinode-891772-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3218142913/001/cp-test_multinode-891772-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 ssh -n multinode-891772-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 cp multinode-891772-m03:/home/docker/cp-test.txt multinode-891772:/home/docker/cp-test_multinode-891772-m03_multinode-891772.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 ssh -n multinode-891772-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 ssh -n multinode-891772 "sudo cat /home/docker/cp-test_multinode-891772-m03_multinode-891772.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 cp multinode-891772-m03:/home/docker/cp-test.txt multinode-891772-m02:/home/docker/cp-test_multinode-891772-m03_multinode-891772-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 ssh -n multinode-891772-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 ssh -n multinode-891772-m02 "sudo cat /home/docker/cp-test_multinode-891772-m03_multinode-891772-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.84s)
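
The copy matrix above exercises minikube cp in every direction (host to node, node back to the host, and node to node) and verifies each transfer by reading the file over ssh -n. One round trip looks like this; the node names mirror the run above and the destination file name is arbitrary:

    # host -> control plane, then control plane -> worker, each verified over ssh
    out/minikube-linux-amd64 -p multinode-891772 cp testdata/cp-test.txt multinode-891772:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-891772 ssh -n multinode-891772 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p multinode-891772 cp multinode-891772:/home/docker/cp-test.txt multinode-891772-m02:/home/docker/cp-test-copy.txt
    out/minikube-linux-amd64 -p multinode-891772 ssh -n multinode-891772-m02 "sudo cat /home/docker/cp-test-copy.txt"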

                                                
                                    
TestMultiNode/serial/StopNode (2.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-891772 node stop m03: (1.255919758s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-891772 status: exit status 7 (500.493492ms)

                                                
                                                
-- stdout --
	multinode-891772
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-891772-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-891772-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-891772 status --alsologtostderr: exit status 7 (494.858964ms)

                                                
                                                
-- stdout --
	multinode-891772
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-891772-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-891772-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:56:51.022221  206914 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:56:51.022475  206914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:56:51.022484  206914 out.go:374] Setting ErrFile to fd 2...
	I1123 09:56:51.022489  206914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:56:51.022708  206914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:56:51.022890  206914 out.go:368] Setting JSON to false
	I1123 09:56:51.022919  206914 mustload.go:66] Loading cluster: multinode-891772
	I1123 09:56:51.023044  206914 notify.go:221] Checking for updates...
	I1123 09:56:51.023823  206914 config.go:182] Loaded profile config "multinode-891772": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:56:51.023864  206914 status.go:174] checking status of multinode-891772 ...
	I1123 09:56:51.024975  206914 cli_runner.go:164] Run: docker container inspect multinode-891772 --format={{.State.Status}}
	I1123 09:56:51.042794  206914 status.go:371] multinode-891772 host status = "Running" (err=<nil>)
	I1123 09:56:51.042820  206914 host.go:66] Checking if "multinode-891772" exists ...
	I1123 09:56:51.043154  206914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-891772
	I1123 09:56:51.061698  206914 host.go:66] Checking if "multinode-891772" exists ...
	I1123 09:56:51.061929  206914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:56:51.061966  206914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891772
	I1123 09:56:51.078163  206914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/multinode-891772/id_rsa Username:docker}
	I1123 09:56:51.176082  206914 ssh_runner.go:195] Run: systemctl --version
	I1123 09:56:51.182318  206914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:56:51.193990  206914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:56:51.250549  206914 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-23 09:56:51.24125094 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:56:51.251065  206914 kubeconfig.go:125] found "multinode-891772" server: "https://192.168.67.2:8443"
	I1123 09:56:51.251112  206914 api_server.go:166] Checking apiserver status ...
	I1123 09:56:51.251154  206914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:56:51.262542  206914 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	W1123 09:56:51.270714  206914 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:56:51.270767  206914 ssh_runner.go:195] Run: ls
	I1123 09:56:51.274326  206914 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1123 09:56:51.278286  206914 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1123 09:56:51.278305  206914 status.go:463] multinode-891772 apiserver status = Running (err=<nil>)
	I1123 09:56:51.278313  206914 status.go:176] multinode-891772 status: &{Name:multinode-891772 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:56:51.278341  206914 status.go:174] checking status of multinode-891772-m02 ...
	I1123 09:56:51.278602  206914 cli_runner.go:164] Run: docker container inspect multinode-891772-m02 --format={{.State.Status}}
	I1123 09:56:51.295330  206914 status.go:371] multinode-891772-m02 host status = "Running" (err=<nil>)
	I1123 09:56:51.295352  206914 host.go:66] Checking if "multinode-891772-m02" exists ...
	I1123 09:56:51.295625  206914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-891772-m02
	I1123 09:56:51.312323  206914 host.go:66] Checking if "multinode-891772-m02" exists ...
	I1123 09:56:51.312626  206914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:56:51.312674  206914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891772-m02
	I1123 09:56:51.330417  206914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21968-64343/.minikube/machines/multinode-891772-m02/id_rsa Username:docker}
	I1123 09:56:51.428243  206914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:56:51.440048  206914 status.go:176] multinode-891772-m02 status: &{Name:multinode-891772-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:56:51.440083  206914 status.go:174] checking status of multinode-891772-m03 ...
	I1123 09:56:51.440371  206914 cli_runner.go:164] Run: docker container inspect multinode-891772-m03 --format={{.State.Status}}
	I1123 09:56:51.457608  206914 status.go:371] multinode-891772-m03 host status = "Stopped" (err=<nil>)
	I1123 09:56:51.457627  206914 status.go:384] host is not running, skipping remaining checks
	I1123 09:56:51.457635  206914 status.go:176] multinode-891772-m03 status: &{Name:multinode-891772-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
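Note on the non-zero exits above: with one worker stopped, "minikube status" lists the stopped node and returns exit code 7, which the test accepts as a pass. A minimal manual reproduction, assuming the same profile and driver as this run:

    out/minikube-linux-amd64 -p multinode-891772 node stop m03
    out/minikube-linux-amd64 -p multinode-891772 status
    echo $?    # 7 is expected while multinode-891772-m03 is stopped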

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-891772 node start m03 -v=5 --alsologtostderr: (6.479592556s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.18s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (79.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-891772
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-891772
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-891772: (31.33675331s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-891772 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-891772 --wait=true -v=5 --alsologtostderr: (47.883639857s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-891772
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.35s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-891772 node delete m03: (4.644090062s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.24s)
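The quoted go-template in the final kubectl call prints only each node's Ready condition, so after deleting m03 the expected output is one "True" per remaining node. Standalone form of the same check (a sketch, assuming the kubeconfig context created for this profile):

    kubectl --context multinode-891772 get nodes \
      -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'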

                                                
                                    
TestMultiNode/serial/StopMultiNode (30.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-891772 stop: (30.069767094s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-891772 status: exit status 7 (103.985151ms)

                                                
                                                
-- stdout --
	multinode-891772
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-891772-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-891772 status --alsologtostderr: exit status 7 (99.816021ms)

                                                
                                                
-- stdout --
	multinode-891772
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-891772-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:58:53.460404  216770 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:58:53.460661  216770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:58:53.460671  216770 out.go:374] Setting ErrFile to fd 2...
	I1123 09:58:53.460675  216770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:58:53.460899  216770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 09:58:53.461136  216770 out.go:368] Setting JSON to false
	I1123 09:58:53.461169  216770 mustload.go:66] Loading cluster: multinode-891772
	I1123 09:58:53.461293  216770 notify.go:221] Checking for updates...
	I1123 09:58:53.461641  216770 config.go:182] Loaded profile config "multinode-891772": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:58:53.461667  216770 status.go:174] checking status of multinode-891772 ...
	I1123 09:58:53.462384  216770 cli_runner.go:164] Run: docker container inspect multinode-891772 --format={{.State.Status}}
	I1123 09:58:53.482960  216770 status.go:371] multinode-891772 host status = "Stopped" (err=<nil>)
	I1123 09:58:53.482987  216770 status.go:384] host is not running, skipping remaining checks
	I1123 09:58:53.482993  216770 status.go:176] multinode-891772 status: &{Name:multinode-891772 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:58:53.483023  216770 status.go:174] checking status of multinode-891772-m02 ...
	I1123 09:58:53.483283  216770 cli_runner.go:164] Run: docker container inspect multinode-891772-m02 --format={{.State.Status}}
	I1123 09:58:53.501279  216770 status.go:371] multinode-891772-m02 host status = "Stopped" (err=<nil>)
	I1123 09:58:53.501303  216770 status.go:384] host is not running, skipping remaining checks
	I1123 09:58:53.501309  216770 status.go:176] multinode-891772-m02 status: &{Name:multinode-891772-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.27s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (50.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-891772 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-891772 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (50.121404433s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-891772 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.71s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (23.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-891772
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-891772-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-891772-m02 --driver=docker  --container-runtime=crio: exit status 14 (78.402754ms)

                                                
                                                
-- stdout --
	* [multinode-891772-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-891772-m02' is duplicated with machine name 'multinode-891772-m02' in profile 'multinode-891772'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-891772-m03 --driver=docker  --container-runtime=crio
E1123 09:59:54.010359   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-891772-m03 --driver=docker  --container-runtime=crio: (20.254343041s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-891772
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-891772: exit status 80 (301.327969ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-891772 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-891772-m03 already exists in multinode-891772-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-891772-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-891772-m03: (2.382279128s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.08s)
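Background for the two rejections above: minikube derives machine names from the profile name (<profile>, <profile>-m02, <profile>-m03, ...), so a new profile named multinode-891772-m02 collides with the existing profile's second node (exit 14), and "node add" later refuses to create multinode-891772-m03 while a standalone profile of that name still exists (exit 80). Sketch of the colliding call and the cleanup, using the profiles from this run:

    out/minikube-linux-amd64 start -p multinode-891772-m02 --driver=docker --container-runtime=crio    # rejected: duplicate machine name
    out/minikube-linux-amd64 delete -p multinode-891772-m03                                            # frees the -m03 name for 'node add'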

                                                
                                    
TestScheduledStopUnix (97.34s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-474690 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-474690 --memory=3072 --driver=docker  --container-runtime=crio: (21.948996502s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-474690 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 10:07:52.585076  235231 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:07:52.585229  235231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:07:52.585241  235231 out.go:374] Setting ErrFile to fd 2...
	I1123 10:07:52.585245  235231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:07:52.585493  235231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:07:52.585787  235231 out.go:368] Setting JSON to false
	I1123 10:07:52.585904  235231 mustload.go:66] Loading cluster: scheduled-stop-474690
	I1123 10:07:52.586294  235231 config.go:182] Loaded profile config "scheduled-stop-474690": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:07:52.586374  235231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/scheduled-stop-474690/config.json ...
	I1123 10:07:52.586580  235231 mustload.go:66] Loading cluster: scheduled-stop-474690
	I1123 10:07:52.586691  235231 config.go:182] Loaded profile config "scheduled-stop-474690": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-474690 -n scheduled-stop-474690
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-474690 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 10:07:52.981876  235379 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:07:52.981989  235379 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:07:52.982000  235379 out.go:374] Setting ErrFile to fd 2...
	I1123 10:07:52.982007  235379 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:07:52.982233  235379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:07:52.982501  235379 out.go:368] Setting JSON to false
	I1123 10:07:52.982707  235379 daemonize_unix.go:73] killing process 235265 as it is an old scheduled stop
	I1123 10:07:52.982821  235379 mustload.go:66] Loading cluster: scheduled-stop-474690
	I1123 10:07:52.983205  235379 config.go:182] Loaded profile config "scheduled-stop-474690": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:07:52.983297  235379 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/scheduled-stop-474690/config.json ...
	I1123 10:07:52.983488  235379 mustload.go:66] Loading cluster: scheduled-stop-474690
	I1123 10:07:52.983613  235379 config.go:182] Loaded profile config "scheduled-stop-474690": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1123 10:07:52.988727   67870 retry.go:31] will retry after 63.017µs: open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/scheduled-stop-474690/pid: no such file or directory
I1123 10:07:52.989900   67870 retry.go:31] will retry after 164.191µs: open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/scheduled-stop-474690/pid: no such file or directory
I1123 10:07:52.991045   67870 retry.go:31] will retry after 239.398µs: open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/scheduled-stop-474690/pid: no such file or directory
I1123 10:07:52.992229   67870 retry.go:31] will retry after 437.398µs: open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/scheduled-stop-474690/pid: no such file or directory
I1123 10:07:52.993359   67870 retry.go:31] will retry after 752.322µs: open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/scheduled-stop-474690/pid: no such file or directory
I1123 10:07:52.994502   67870 retry.go:31] will retry after 858.289µs: open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/scheduled-stop-474690/pid: no such file or directory
I1123 10:07:52.995649   67870 retry.go:31] will retry after 1.470424ms: open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/scheduled-stop-474690/pid: no such file or directory
I1123 10:07:52.997845   67870 retry.go:31] will retry after 1.452926ms: open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/scheduled-stop-474690/pid: no such file or directory
I1123 10:07:53.000116   67870 retry.go:31] will retry after 2.624157ms: open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/scheduled-stop-474690/pid: no such file or directory
I1123 10:07:53.003324   67870 retry.go:31] will retry after 1.951804ms: open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/scheduled-stop-474690/pid: no such file or directory
I1123 10:07:53.005524   67870 retry.go:31] will retry after 8.205445ms: open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/scheduled-stop-474690/pid: no such file or directory
I1123 10:07:53.014718   67870 retry.go:31] will retry after 5.156666ms: open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/scheduled-stop-474690/pid: no such file or directory
I1123 10:07:53.020927   67870 retry.go:31] will retry after 11.639978ms: open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/scheduled-stop-474690/pid: no such file or directory
I1123 10:07:53.033155   67870 retry.go:31] will retry after 25.177807ms: open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/scheduled-stop-474690/pid: no such file or directory
I1123 10:07:53.059449   67870 retry.go:31] will retry after 40.97245ms: open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/scheduled-stop-474690/pid: no such file or directory
I1123 10:07:53.100711   67870 retry.go:31] will retry after 38.755843ms: open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/scheduled-stop-474690/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-474690 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-474690 -n scheduled-stop-474690
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-474690
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-474690 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 10:08:18.890210  236019 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:08:18.890326  236019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:08:18.890338  236019 out.go:374] Setting ErrFile to fd 2...
	I1123 10:08:18.890343  236019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:08:18.890507  236019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:08:18.890730  236019 out.go:368] Setting JSON to false
	I1123 10:08:18.890811  236019 mustload.go:66] Loading cluster: scheduled-stop-474690
	I1123 10:08:18.891109  236019 config.go:182] Loaded profile config "scheduled-stop-474690": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:08:18.891169  236019 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/scheduled-stop-474690/config.json ...
	I1123 10:08:18.891346  236019 mustload.go:66] Loading cluster: scheduled-stop-474690
	I1123 10:08:18.891454  236019 config.go:182] Loaded profile config "scheduled-stop-474690": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-474690
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-474690: exit status 7 (83.564607ms)

                                                
                                                
-- stdout --
	scheduled-stop-474690
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-474690 -n scheduled-stop-474690
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-474690 -n scheduled-stop-474690: exit status 7 (80.006593ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-474690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-474690
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-474690: (3.860925885s)
--- PASS: TestScheduledStopUnix (97.34s)
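The flow above arms a delayed stop, replaces it, cancels it, then lets a fresh 15s schedule fire and verifies the host ends up Stopped (status exit 7). Condensed sketch against the same profile:

    out/minikube-linux-amd64 stop -p scheduled-stop-474690 --schedule 15s          # arm a background stop
    out/minikube-linux-amd64 stop -p scheduled-stop-474690 --cancel-scheduled      # or cancel it before it fires
    out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-474690    # prints Stopped (exit 7) once a stop has fired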

                                                
                                    
TestInsufficientStorage (12.27s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-001514 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-001514 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.79675445s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ea5cc505-4e5d-470d-975b-4d4fa546a602","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-001514] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c4dbc48-6774-42ed-928f-09e938fda90d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21968"}}
	{"specversion":"1.0","id":"1b37782f-dee5-469a-8df4-5eafa08f0d20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ab68964e-6e8a-4d7b-ab22-73574099a879","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig"}}
	{"specversion":"1.0","id":"ed4f912c-c35d-417f-8575-70a04db8e3b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube"}}
	{"specversion":"1.0","id":"d1252b7e-ac30-4b07-859b-3990aec9d91f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8affd91d-f23c-4b9c-a9d2-95761826d69f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"05a4a2ac-fcd6-4b03-8ba1-47de282d553d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"9726f0eb-5415-42eb-a670-8c80d01c9910","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"08152f75-8dad-4167-874b-c7b455f5aabe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ad2039f1-1930-41ab-857c-cceef6fe1cf5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e88d2bf0-e2ab-423a-9326-2f64fb28879f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-001514\" primary control-plane node in \"insufficient-storage-001514\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"547a93ea-fc61-4653-9e82-c4053d2b8a6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"36f39b3d-de05-43ea-91c1-4dc5eb795f31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2549e0ba-9d7e-4238-a7de-e4328e290459","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-001514 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-001514 --output=json --layout=cluster: exit status 7 (296.814906ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-001514","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-001514","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1123 10:09:18.001559  238550 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-001514" does not appear in /home/jenkins/minikube-integration/21968-64343/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-001514 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-001514 --output=json --layout=cluster: exit status 7 (294.100758ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-001514","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-001514","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1123 10:09:18.296548  238658 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-001514" does not appear in /home/jenkins/minikube-integration/21968-64343/kubeconfig
	E1123 10:09:18.306619  238658 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/insufficient-storage-001514/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-001514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-001514
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-001514: (1.881160437s)
--- PASS: TestInsufficientStorage (12.27s)
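The out-of-space condition here is simulated rather than real: the run exports MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 (both visible in the JSON events above), so the preflight check reports /var as full and start exits with code 26 (RSRC_DOCKER_STORAGE). A sketch of the same invocation:

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      out/minikube-linux-amd64 start -p insufficient-storage-001514 --memory=3072 --output=json --wait=true --driver=docker --container-runtime=crio
    # exits 26; per the error message, --force skips the storage check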

                                                
                                    
TestRunningBinaryUpgrade (49.05s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3368252399 start -p running-upgrade-008611 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3368252399 start -p running-upgrade-008611 --memory=3072 --vm-driver=docker  --container-runtime=crio: (20.627887929s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-008611 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-008611 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.73077536s)
helpers_test.go:175: Cleaning up "running-upgrade-008611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-008611
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-008611: (2.529536213s)
--- PASS: TestRunningBinaryUpgrade (49.05s)
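For orientation: this test starts the profile with a previously released binary and then re-runs start on the same, still-running profile with the binary under test, i.e. an in-place binary upgrade. Condensed sketch (the /tmp path is the temporary copy of v1.32.0 downloaded by the test):

    /tmp/minikube-v1.32.0.3368252399 start -p running-upgrade-008611 --memory=3072 --vm-driver=docker --container-runtime=crio
    out/minikube-linux-amd64 start -p running-upgrade-008611 --memory=3072 --driver=docker --container-runtime=crio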

                                                
                                    
TestKubernetesUpgrade (313.09s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-069634 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-069634 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.280552795s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-069634
E1123 10:09:54.009476   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-069634: (2.933755455s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-069634 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-069634 status --format={{.Host}}: exit status 7 (105.154474ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-069634 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-069634 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m28.701842413s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-069634 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-069634 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-069634 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (102.511393ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-069634] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-069634
	    minikube start -p kubernetes-upgrade-069634 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0696342 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-069634 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-069634 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-069634 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.792685673s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-069634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-069634
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-069634: (3.090672698s)
--- PASS: TestKubernetesUpgrade (313.09s)
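The exit status 106 above is the dedicated K8S_DOWNGRADE_UNSUPPORTED code: an existing cluster may only move to a newer Kubernetes version. Condensed from the suggestion printed by minikube, the supported paths are:

    # forward upgrade of the existing profile:
    out/minikube-linux-amd64 start -p kubernetes-upgrade-069634 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio
    # to go back to v1.28.0, recreate the profile instead:
    out/minikube-linux-amd64 delete -p kubernetes-upgrade-069634
    out/minikube-linux-amd64 start -p kubernetes-upgrade-069634 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio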

                                                
                                    
TestMissingContainerUpgrade (135.79s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3795920429 start -p missing-upgrade-417054 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3795920429 start -p missing-upgrade-417054 --memory=3072 --driver=docker  --container-runtime=crio: (1m22.478705506s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-417054
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-417054: (10.429868343s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-417054
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-417054 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-417054 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.086919607s)
helpers_test.go:175: Cleaning up "missing-upgrade-417054" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-417054
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-417054: (2.336207975s)
--- PASS: TestMissingContainerUpgrade (135.79s)
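What this exercises: the profile's node container is stopped and removed behind minikube's back, and the newer binary is expected to detect the missing container and recreate it on the next start. The sequence, condensed from the commands above (the /tmp path is the temporary v1.32.0 binary the test downloaded):

    /tmp/minikube-v1.32.0.3795920429 start -p missing-upgrade-417054 --memory=3072 --driver=docker --container-runtime=crio
    docker stop missing-upgrade-417054 && docker rm missing-upgrade-417054    # remove the node container out from under minikube
    out/minikube-linux-amd64 start -p missing-upgrade-417054 --memory=3072 --driver=docker --container-runtime=crio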

                                                
                                    
TestPause/serial/Start (49.58s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-528307 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-528307 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (49.581579862s)
--- PASS: TestPause/serial/Start (49.58s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (9.06s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-528307 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-528307 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (9.053075474s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (9.06s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-045033 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-045033 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (86.036843ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-045033] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
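As the MK_USAGE error says, --no-kubernetes and an explicit --kubernetes-version are mutually exclusive. A sketch of the accepted form (the config unset line is only needed if a version is pinned in the global minikube config):

    out/minikube-linux-amd64 config unset kubernetes-version
    out/minikube-linux-amd64 start -p NoKubernetes-045033 --no-kubernetes --driver=docker --container-runtime=crio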

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (23.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-045033 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-045033 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.7989567s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-045033 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (23.14s)

                                                
                                    
TestNetworkPlugins/group/false (3.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-791161 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-791161 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (175.822977ms)

                                                
                                                
-- stdout --
	* [false-791161] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:10:49.019156  260648 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:10:49.019447  260648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:10:49.019458  260648 out.go:374] Setting ErrFile to fd 2...
	I1123 10:10:49.019464  260648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:10:49.019682  260648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-64343/.minikube/bin
	I1123 10:10:49.020311  260648 out.go:368] Setting JSON to false
	I1123 10:10:49.021699  260648 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10390,"bootTime":1763882259,"procs":279,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:10:49.021773  260648 start.go:143] virtualization: kvm guest
	I1123 10:10:49.023539  260648 out.go:179] * [false-791161] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:10:49.025069  260648 notify.go:221] Checking for updates...
	I1123 10:10:49.025101  260648 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:10:49.026368  260648 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:10:49.027570  260648 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-64343/kubeconfig
	I1123 10:10:49.028636  260648 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-64343/.minikube
	I1123 10:10:49.029681  260648 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:10:49.031374  260648 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:10:49.032836  260648 config.go:182] Loaded profile config "NoKubernetes-045033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:10:49.032956  260648 config.go:182] Loaded profile config "kubernetes-upgrade-069634": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:10:49.033106  260648 config.go:182] Loaded profile config "missing-upgrade-417054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1123 10:10:49.033231  260648 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:10:49.059427  260648 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 10:10:49.059632  260648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:10:49.119957  260648 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-23 10:10:49.110475019 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 10:10:49.120109  260648 docker.go:319] overlay module found
	I1123 10:10:49.121696  260648 out.go:179] * Using the docker driver based on user configuration
	I1123 10:10:49.122712  260648 start.go:309] selected driver: docker
	I1123 10:10:49.122727  260648 start.go:927] validating driver "docker" against <nil>
	I1123 10:10:49.122741  260648 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:10:49.124572  260648 out.go:203] 
	W1123 10:10:49.125519  260648 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1123 10:10:49.126486  260648 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-791161 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-791161

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-791161

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-791161

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-791161

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-791161

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-791161

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-791161

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-791161

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-791161

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-791161

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-791161

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-791161" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-791161" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 10:10:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-069634
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 10:10:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-417054
contexts:
- context:
    cluster: kubernetes-upgrade-069634
    user: kubernetes-upgrade-069634
  name: kubernetes-upgrade-069634
- context:
    cluster: missing-upgrade-417054
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 10:10:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-417054
  name: missing-upgrade-417054
current-context: missing-upgrade-417054
kind: Config
users:
- name: kubernetes-upgrade-069634
  user:
    client-certificate: /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/kubernetes-upgrade-069634/client.crt
    client-key: /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/kubernetes-upgrade-069634/client.key
- name: missing-upgrade-417054
  user:
    client-certificate: /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/missing-upgrade-417054/client.crt
    client-key: /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/missing-upgrade-417054/client.key
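Note: every kubectl and debug command above fails with "context was not found" because the kubeconfig shown here only registers the kubernetes-upgrade-069634 and missing-upgrade-417054 profiles; a false-791161 context was never created. A quick manual check (a sketch, assuming the same KUBECONFIG the test run used) would be:
	kubectl config get-contexts       # lists only the two upgrade profiles
	kubectl config current-context    # prints: missing-upgrade-417054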

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-791161

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791161"

                                                
                                                
----------------------- debugLogs end: false-791161 [took: 3.338530936s] --------------------------------
helpers_test.go:175: Cleaning up "false-791161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-791161
--- PASS: TestNetworkPlugins/group/false (3.69s)
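For context, this "false" case deliberately runs without a CNI, and minikube's refusal ("The "crio" container runtime requires CNI") is the expected pass condition. A sketch of the two invocations follows; the exact failing flags are not shown above, so --cni=false is an assumption, while the working form mirrors the bridge run later in this report:
	# assumed invocation that triggers the MK_USAGE error for cri-o
	out/minikube-linux-amd64 start -p false-791161 --driver=docker --container-runtime=crio --cni=false
	# a cri-o profile needs an explicit (or default) CNI, e.g.:
	out/minikube-linux-amd64 start -p bridge-791161 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio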

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (16.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-045033 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-045033 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (13.945769661s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-045033 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-045033 status -o json: exit status 2 (317.33355ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-045033","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-045033
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-045033: (1.95630005s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.22s)
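The exit status 2 above is expected: minikube status exits non-zero when not every component is running, and the JSON confirms the host container is up while the kubelet and API server stay stopped in a --no-kubernetes profile. A manual check along the same lines (the jq filter is illustrative, not part of the test) might be:
	out/minikube-linux-amd64 -p NoKubernetes-045033 status -o json \
	  | jq -e '.Host == "Running" and .Kubelet == "Stopped" and .APIServer == "Stopped"'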

                                                
                                    
x
+
TestNoKubernetes/serial/Start (4.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-045033 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1123 10:11:12.237721   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-045033 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.158928486s)
--- PASS: TestNoKubernetes/serial/Start (4.16s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21968-64343/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-045033 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-045033 "sudo systemctl is-active --quiet service kubelet": exit status 1 (345.147721ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)
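The "Process exited with status 3" in stderr is systemctl reporting that the kubelet unit is inactive (systemctl is-active exits 0 only for an active unit), so the non-zero exit is exactly what this check wants. Run by hand it would look roughly like:
	out/minikube-linux-amd64 ssh -p NoKubernetes-045033 "sudo systemctl is-active kubelet"
	# prints "inactive" and exits 3 on a --no-kubernetes profile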

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (19.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (18.811198762s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (19.70s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-045033
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-045033: (1.301996379s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-045033 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-045033 --driver=docker  --container-runtime=crio: (7.577161048s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.58s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-045033 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-045033 "sudo systemctl is-active --quiet service kubelet": exit status 1 (331.611884ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (3.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.10s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (38.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2838347682 start -p stopped-upgrade-740340 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2838347682 start -p stopped-upgrade-740340 --memory=3072 --vm-driver=docker  --container-runtime=crio: (21.433248091s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2838347682 -p stopped-upgrade-740340 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2838347682 -p stopped-upgrade-740340 stop: (2.292356558s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-740340 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-740340 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.857541005s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (38.58s)
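Condensed, the upgrade path exercised above is: start the profile with the old v1.32.0 binary, stop it with that same binary, then start it again with the freshly built binary (commands copied from the log, verbosity flags omitted):
	/tmp/minikube-v1.32.0.2838347682 start -p stopped-upgrade-740340 --memory=3072 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.32.0.2838347682 -p stopped-upgrade-740340 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-740340 --memory=3072 --driver=docker --container-runtime=crio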

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (45.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-791161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-791161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (45.076178725s)
--- PASS: TestNetworkPlugins/group/auto/Start (45.08s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-740340
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-740340: (1.00626343s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (42.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-791161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-791161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (42.110966668s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-791161 "pgrep -a kubelet"
I1123 10:13:39.467632   67870 config.go:182] Loaded profile config "auto-791161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (8.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-791161 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7p224" [c9ca9467-7fa2-4f81-a255-6fbfc19914f7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7p224" [c9ca9467-7fa2-4f81-a255-6fbfc19914f7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.003769593s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-5bcjj" [64b81a0c-8539-4a1f-89f0-06fbbc58ab2c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004269172s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
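The ControllerPod checks above poll for a Running, Ready pod by label; an equivalent one-off check with kubectl (a sketch, not what the test harness actually runs) would be:
	kubectl --context kindnet-791161 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=600s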

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-791161 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-791161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-791161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-791161 "pgrep -a kubelet"
I1123 10:13:48.384278   67870 config.go:182] Loaded profile config "kindnet-791161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (8.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-791161 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-v5982" [7b55d99b-e5ee-4db2-a721-fa1e39552c2a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-v5982" [7b55d99b-e5ee-4db2-a721-fa1e39552c2a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004736021s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-791161 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-791161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-791161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (51.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-791161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-791161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (51.744511307s)
--- PASS: TestNetworkPlugins/group/calico/Start (51.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (52.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-791161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-791161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (52.418793658s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (39.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-791161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-791161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (39.769459836s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (39.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (56.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-791161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1123 10:14:54.009379   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/addons-768607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-791161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (56.043894187s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-vn8fj" [6608d2da-2d9f-4598-aa62-76c9ae5bc484] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-vn8fj" [6608d2da-2d9f-4598-aa62-76c9ae5bc484] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004135118s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-791161 "pgrep -a kubelet"
I1123 10:15:05.319632   67870 config.go:182] Loaded profile config "calico-791161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-791161 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bxk87" [b88feb27-b62e-45e0-ad19-af148e16f921] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bxk87" [b88feb27-b62e-45e0-ad19-af148e16f921] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003007396s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-791161 "pgrep -a kubelet"
I1123 10:15:10.020266   67870 config.go:182] Loaded profile config "custom-flannel-791161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-791161 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qss7s" [95b543fb-58bd-47a8-ba8d-733202859358] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qss7s" [95b543fb-58bd-47a8-ba8d-733202859358] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003345799s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-791161 "pgrep -a kubelet"
I1123 10:15:10.977805   67870 config.go:182] Loaded profile config "enable-default-cni-791161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-791161 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-brvrx" [f78b759a-af53-42ca-817e-9a653d31e871] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-brvrx" [f78b759a-af53-42ca-817e-9a653d31e871] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.00482798s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-791161 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-791161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-791161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-791161 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-791161 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-791161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-791161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-791161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-791161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-82dzn" [e39cd7e9-698b-4526-a793-844e88f1cf1d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006893464s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-791161 "pgrep -a kubelet"
I1123 10:15:35.692526   67870 config.go:182] Loaded profile config "flannel-791161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (8.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-791161 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-whqz6" [1ad889fc-f32f-404e-ac80-811264d0ced8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-whqz6" [1ad889fc-f32f-404e-ac80-811264d0ced8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.005377279s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (67.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-791161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-791161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m7.84306502s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.84s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (51.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-990757 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-990757 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.969005058s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (51.97s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (57.5s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-541522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-541522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (57.495475328s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (57.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-791161 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-791161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-791161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)
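The Localhost and HairPin probes above differ only in the dial target: localhost versus the pod's own service name (netcat), which is what makes the second one a hairpin check. A hand-run equivalent, assuming the netcat Service from testdata/netcat-deployment.yaml still resolves and listens on 8080 as it does in the log:

	# resolve the service name from inside the pod, then dial back through the service VIP
	kubectl --context flannel-791161 exec deployment/netcat -- nslookup netcat
	kubectl --context flannel-791161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080 && echo hairpin ok"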

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (40.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-412306 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1123 10:16:12.237848   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-412306 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.402763294s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.40s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-990757 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f5410b61-89c3-4f61-ae72-922d00c885eb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f5410b61-89c3-4f61-ae72-922d00c885eb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003322973s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-990757 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-541522 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ea00f8c7-1f30-4a4a-87f5-a86e0f94c3be] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ea00f8c7-1f30-4a4a-87f5-a86e0f94c3be] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.002977557s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-541522 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-791161 "pgrep -a kubelet"
I1123 10:16:44.298379   67870 config.go:182] Loaded profile config "bridge-791161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (8.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-791161 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jr24l" [d1004422-54d9-448c-abd9-f07e4f78d052] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jr24l" [d1004422-54d9-448c-abd9-f07e4f78d052] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.00406211s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.17s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (16.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-990757 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-990757 --alsologtostderr -v=3: (16.084180295s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-412306 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5b9d8e12-8c4d-4b2d-b287-4cae17b49f6e] Pending
helpers_test.go:352: "busybox" [5b9d8e12-8c4d-4b2d-b287-4cae17b49f6e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5b9d8e12-8c4d-4b2d-b287-4cae17b49f6e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003587563s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-412306 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-791161 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-791161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-791161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (18.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-541522 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-541522 --alsologtostderr -v=3: (18.66964962s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.67s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (18.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-412306 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-412306 --alsologtostderr -v=3: (18.129881799s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.13s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-990757 -n old-k8s-version-990757
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-990757 -n old-k8s-version-990757: exit status 7 (90.16205ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-990757 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (52.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-990757 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-990757 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.828755978s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-990757 -n old-k8s-version-990757
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.17s)
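EnableAddonAfterStop above only issues the dashboard enable while the cluster is stopped; whether the addon actually comes up is what UserAppExistsAfterStop below verifies. A minimal manual equivalent after this second start, with the namespace and label taken from that check and an arbitrary 180s timeout:

	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-990757
	kubectl --context old-k8s-version-990757 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=180s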

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-541522 -n no-preload-541522
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-541522 -n no-preload-541522: exit status 7 (93.446599ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-541522 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (49.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-541522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-541522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.17114564s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-541522 -n no-preload-541522
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.52s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-772252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-772252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (46.207858804s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.21s)
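The only non-default flag in this profile is --apiserver-port=8444. A quick way to confirm which port the generated kubeconfig entry points at (the jsonpath expression is an illustration, not something the test runs):

	kubectl config view --minify --context default-k8s-diff-port-772252 -o jsonpath='{.clusters[0].cluster.server}'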

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-412306 -n embed-certs-412306
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-412306 -n embed-certs-412306: exit status 7 (91.642136ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-412306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (45.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-412306 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-412306 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (44.820625322s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-412306 -n embed-certs-412306
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (45.17s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-fm8f6" [ef986112-2b84-4018-a524-06c1bd693ed4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004431235s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-772252 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c037ffcf-7b8b-4442-9c4e-d188a4de7b08] Pending
helpers_test.go:352: "busybox" [c037ffcf-7b8b-4442-9c4e-d188a4de7b08] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c037ffcf-7b8b-4442-9c4e-d188a4de7b08] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004129439s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-772252 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-fm8f6" [ef986112-2b84-4018-a524-06c1bd693ed4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003322606s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-990757 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-v2hjb" [ee7a029a-15b4-431e-9a2e-31dcbdc111bb] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003277899s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dw5cf" [fbc63048-24c4-4cc1-8cf1-dcacbe4ba959] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003795609s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-990757 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-v2hjb" [ee7a029a-15b4-431e-9a2e-31dcbdc111bb] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004375534s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-541522 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (18.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-772252 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-772252 --alsologtostderr -v=3: (18.172023825s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.17s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dw5cf" [fbc63048-24c4-4cc1-8cf1-dcacbe4ba959] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004067484s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-412306 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-541522 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-412306 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (25.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-956615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-956615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (25.215276707s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (25.22s)
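This profile pushes the pod network through --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 with --network-plugin=cni. A spot check that the override reached the node object; if it was applied, the printed range should fall inside 10.42.0.0/16 (this inspects the API object only and says nothing about what a CNI actually routes):

	kubectl --context newest-cni-956615 get nodes -o jsonpath='{.items[*].spec.podCIDR}'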

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-772252 -n default-k8s-diff-port-772252
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-772252 -n default-k8s-diff-port-772252: exit status 7 (78.957324ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-772252 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (43.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-772252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1123 10:18:39.654600   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/auto-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:39.660971   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/auto-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:39.672472   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/auto-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:39.693902   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/auto-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:39.735334   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/auto-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:39.816762   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/auto-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:39.978357   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/auto-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:40.300882   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/auto-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:40.942250   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/auto-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-772252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (43.334565686s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-772252 -n default-k8s-diff-port-772252
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (43.66s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (8.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-956615 --alsologtostderr -v=3
E1123 10:18:43.376959   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/kindnet-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:44.658516   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/kindnet-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:44.786148   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/auto-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:47.221220   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/kindnet-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:49.908335   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/auto-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-956615 --alsologtostderr -v=3: (8.466205428s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.47s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-956615 -n newest-cni-956615
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-956615 -n newest-cni-956615: exit status 7 (80.894966ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-956615 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (10.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-956615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1123 10:18:52.342911   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/kindnet-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:19:00.152242   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/auto-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-956615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (9.759456593s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-956615 -n newest-cni-956615
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-956615 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-cbx67" [1a366b58-3166-4114-bd99-9b1dd0648311] Running
E1123 10:19:15.308433   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/functional-157940/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003783474s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-cbx67" [1a366b58-3166-4114-bd99-9b1dd0648311] Running
E1123 10:19:20.634309   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/auto-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:19:23.066678   67870 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/kindnet-791161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003789792s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-772252 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-772252 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    

Test skip (27/328)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-791161 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-791161

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-791161

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-791161

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-791161

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-791161

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-791161

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-791161

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-791161

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-791161

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-791161

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-791161

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-791161" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-791161" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 10:10:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-069634
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 10:10:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-417054
contexts:
- context:
    cluster: kubernetes-upgrade-069634
    user: kubernetes-upgrade-069634
  name: kubernetes-upgrade-069634
- context:
    cluster: missing-upgrade-417054
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 10:10:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-417054
  name: missing-upgrade-417054
current-context: missing-upgrade-417054
kind: Config
users:
- name: kubernetes-upgrade-069634
  user:
    client-certificate: /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/kubernetes-upgrade-069634/client.crt
    client-key: /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/kubernetes-upgrade-069634/client.key
- name: missing-upgrade-417054
  user:
    client-certificate: /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/missing-upgrade-417054/client.crt
    client-key: /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/missing-upgrade-417054/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-791161

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791161"

                                                
                                                
----------------------- debugLogs end: kubenet-791161 [took: 3.710309448s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-791161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-791161
--- SKIP: TestNetworkPlugins/group/kubenet (3.91s)
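Note on the failures above: every kubenet-791161 probe returns "context was not found" or "Profile not found" because the test is skipped before any cluster is created, so no kubeconfig context for that profile ever exists (the dumped kubectl config lists only kubernetes-upgrade-069634 and missing-upgrade-417054). A minimal sketch of how such a profile could be inspected or started outside this run, assuming the standard minikube and kubectl CLI flags (not executed here):

  # hypothetical commands, not part of this CI run
  kubectl config get-contexts                  # kubenet-791161 would be absent, matching the errors above
  minikube start -p kubenet-791161 --container-runtime=crio --cni=bridge   # crio needs an explicit CNI instead of kubenet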

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-791161 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-791161

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-791161

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-791161

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-791161

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-791161

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-791161

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-791161

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-791161

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-791161

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-791161

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-791161

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-791161" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-791161

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-791161

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-791161

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-791161

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-791161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-791161" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 10:10:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: NoKubernetes-045033
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 10:10:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-069634
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21968-64343/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 10:10:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-417054
contexts:
- context:
    cluster: NoKubernetes-045033
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 10:10:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-045033
  name: NoKubernetes-045033
- context:
    cluster: kubernetes-upgrade-069634
    user: kubernetes-upgrade-069634
  name: kubernetes-upgrade-069634
- context:
    cluster: missing-upgrade-417054
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 10:10:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-417054
  name: missing-upgrade-417054
current-context: NoKubernetes-045033
kind: Config
users:
- name: NoKubernetes-045033
  user:
    client-certificate: /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/NoKubernetes-045033/client.crt
    client-key: /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/NoKubernetes-045033/client.key
- name: kubernetes-upgrade-069634
  user:
    client-certificate: /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/kubernetes-upgrade-069634/client.crt
    client-key: /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/kubernetes-upgrade-069634/client.key
- name: missing-upgrade-417054
  user:
    client-certificate: /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/missing-upgrade-417054/client.crt
    client-key: /home/jenkins/minikube-integration/21968-64343/.minikube/profiles/missing-upgrade-417054/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-791161

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-791161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791161"

                                                
                                                
----------------------- debugLogs end: cilium-791161 [took: 3.624204308s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-791161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-791161
--- SKIP: TestNetworkPlugins/group/cilium (3.79s)
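As with the kubenet block above, the cilium-791161 errors only reflect a kubeconfig context that was never created for the skipped profile. The cilium CNI itself is still a selectable option when starting a profile; a minimal sketch, assuming the standard minikube flags and not executed in this run:

  # hypothetical command, not part of this CI run
  minikube start -p cilium-791161 --container-runtime=crio --cni=cilium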

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-268907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-268907
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                    